Monday, May 29, 2023

Prophetic Dialogue as Approach to the Church’s Engagement with Stakeholders of the Technological Future (P.2/5)

 


2. Effects of Technological Shifts on Humanity

William Gibson, a science fiction writer widely credited with creating the subgenre called cyberpunk, said, “Technologies are morally neutral until we apply them.”[17] Many of the consequences that follow will depend on the intent and the actual application of the individuals and organizations who decide to make use of technological inventions. Despite the many positive effects that technological advances will bring to human life in terms of health, economic, and social benefits, negative consequences can also appear, taking humanity in undesired directions.

2.1. Loss of Privacy

Living in a digitally interconnected world brings enormous benefits in the ability to share and access information, but the risk of losing privacy is real. In the age of Google search and social media, we are already forced to make Faustian bargains in order to use the free and convenient services provided by a few powerful organizations. Every search we make, every like we give, every comment we post, every video we click, and every product we seek is noted by computer algorithms, which, based on all the data collected about us, serve us product advertisements and content most likely to appeal to our interests and sensibilities.

Loss of privacy is not just about companies trying to make a profit by harvesting our personal information. Technological developments can lead to a society where we are unceasingly observed by authorities in the name of maintaining social order. According to the report “The Global Expansion of AI Surveillance,” the proliferation of AI surveillance around the world is taking place at a rapid pace. “A growing number of states are deploying advanced AI surveillance tools to monitor, track, and surveil citizens to accomplish a range of policy objectives—some lawful, others that violate human rights, and many of which fall into a murky middle ground.”[18] It is not surprising that China, an authoritarian state, is the most significant driver of AI surveillance worldwide. Huawei alone provides surveillance technology to at least 50 countries, compared with 14 countries for the largest Japanese supplier, NEC Corporation, and 11 for the largest American supplier, IBM. Because of China’s long experience of building a system of digital surveillance over its own citizens, its know-how is highly sought after by regimes bent on curtailing individual freedom and civil liberties.[19] In the future, the development of AI and the Internet of Things (IoT) may lead to the biggest surveillance network humanity has ever experienced. Everyone will be monitored and tracked everywhere, all the time and from every angle.

2.2. Job Displacement

The development of AI and automation will no doubt have a significant impact on the job market, where job losses will be seen in both blue-collar and white-collar sectors. In the digital revolution, if robots can do most things that people can do, and do them better, the number of jobs left for humans will be severely limited. Business owners set their priority on making a profit, which makes investment in robots that do not require a salary, breaks, sick days, or maternity leave an obvious choice.

How many jobs will be lost to automation, however, is not an exact science. In 2013, the Oxford academics Carl Benedikt Frey and Michael A. Osborne co-authored a paper, since cited more than 4,000 times, stating that by the mid-2030s, 47 percent of US jobs would be at high risk of AI replacement. The top five included telemarketers, insurance underwriters, sports referees, cashiers, and chefs.[20] Recently, however, the authors published an article emphasizing that the idea that half of the jobs in the US will be automated is a misunderstanding of the paper. They clarified that the study did not make predictions but rather provided an estimate of how susceptible existing jobs are to automation arising from developments in artificial intelligence and mobile robotics. The study indicated that 47 percent of jobs are technically automatable; it has been misrepresented to suggest an employment apocalypse, when in fact it only showed the vast potential for automation, similar to what happened at the onset of the Second Industrial Revolution.[21]

In reality, the impact of AI and automation on jobs will vary across economies. There are three possible results: eliminating jobs, enhancing existing jobs, and creating new jobs. A study of 11 Asian economies by MIT Technology Review predicts that over the next five years automation will, on average, eliminate 12 percent of jobs, while 8 percent of jobs will be enhanced by AI capabilities. In all but one economy, the number of jobs eliminated exceeds the number created. In emerging markets such as Vietnam and Indonesia, where the manufacturing sector is large and process-driven, jobs eliminated outnumber jobs enhanced by two to one.[22]

2.3. Growing Inequality

Even though technological progress has created opportunities for many, there is a worrisome trend towards increasing global inequality. According to Oxfam, in 2018 the 26 richest people in the world owned as much wealth as the poorest half of the world’s population. Between 2017 and 2018, a new person was added to the global billionaire list every two days. Yet nearly half of humanity was living on less than 5.50 USD a day.[23] There is concern that technological progress, instead of mitigating this situation, ends up exacerbating it even more, because most of the wealth and power is concentrated in global technology giants such as Microsoft, Apple, and Amazon. The development of AI and automation will result in skyrocketing profits for these firms while people lose jobs and regular income. The irony is that powerful companies will most likely use AI to selectively address various issues plaguing human society while maximizing productivity and profit for themselves, but not employ it to address the very problem of inequality to which the development of AI is itself a significant contributing factor.

Some have even envisioned a future in which inequality will be prevalent due to unequal access to certain revolutionary technological developments. The Israeli historian Yuval Noah Harari presents a possibility in which AI is applied to bioengineering to help humans upgrade their physical health, lengthen their life spans, and increase their cognitive abilities. In this case, wealthy people will take advantage of these opportunities while the poor are left behind, creating a giant gap between them. Human society might be split into biological castes with real differences between the haves and the have-nots.[24] Needless to say, the rich and powerful, the superhuman elite, will constitute a small percentage of the total population; however, they are the ones who control and have access to all the data, the most important asset of the digital era. It is no wonder that tech giants such as Google, Facebook, Baidu, and Tencent are racing to amass as much data as they possibly can. Every bit of data that we give (voluntarily or unknowingly) to these companies by using their free email services, chatting and photo-sharing apps, and entertainment platforms contributes to their long-term aim of data hegemony. The more personal data we give to governments and corporations, the greater the risk of losing social, economic, and political control to them.

2.4. Social Instability

Andrew Yang, the former 2020 US presidential candidate, once remarked, “All you need is self-driving cars to destabilize society. That one innovation will be enough to create riots in the street. And we’re about to do the same thing to retail workers, call center workers, fast-food workers, insurance companies, accounting firms.”[25] The outlook proposed by Yang and others has been characterized as a “robot apocalypse,” which warns of a rather extreme and pessimistic economic and social future. Although reports have painted both rosy and gloomy job futures as a result of automation, many have become increasingly uneasy with the monopolistic behavior exhibited by tech giants who are trying to build ultra-efficient machines and other instruments to augment human life. While the benefits brought about by technological progress are undeniable, questions arise as to whether the negative impact will result in social instability if new genetic treatments that could extend one’s longevity can only be accessed by the wealthy elite. Likewise, will access to brain-computer interface (BCI) implants help the rich gain cognitive capability but cause resentment among the underprivileged and lead to civil unrest and terrorism? Social instability in the form of protests, demonstrations, riots, and violent uprisings can result not only from the livelihoods of massive numbers of people being severely affected by a sudden influx of robots but also from resentment towards very real economic, social, political, and even biological inequality brought about by technological progress.

2.5. Dependence on Technology

Dependence on technology is a major concern in today’s society, as individuals and societies increasingly rely on digital technologies to perform various tasks and solve problems. While technology has undoubtedly brought numerous benefits, including increased efficiency and convenience, overreliance on it can lead to a range of risks and challenges. One of the primary risks associated with technology dependence is vulnerability to system failures and cyberattacks.[26] As we become more reliant on technology, a large-scale power outage or cyberattack on critical infrastructure systems could have severe consequences. Moreover, the use of technology to automate and simplify tasks could erode critical thinking skills, leading to a lack of creativity and innovation in society.

Another issue arising from technology dependence is the reduction in face-to-face interaction. With the increasing use of technology for communication and social interaction, people may miss out on the benefits of human connection, which is an essential aspect of social well-being. Over-reliance on algorithms and AI is another risk of technology dependence. The more sophisticated and capable these technologies become, the more likely we are to rely on them to make decisions and solve problems, which could lead to a lack of human judgment and accountability. Social isolation is another concern that comes with the use of technology. Individuals who are already vulnerable to social isolation and loneliness may be further disconnected from society due to their dependence on technology.[27]

2.6. Environmental Impact

The rapid pace of technological development is bringing about significant changes in society, but it also has a substantial impact on the environment.[28] The energy consumption required for digital technologies is one of the main sources of environmental damage. Data centers and large-scale computing systems require a considerable amount of energy to operate, contributing to greenhouse gas emissions and global warming. This growing trend highlights the need for more sustainable practices in the technology industry, as well as for more energy-efficient and environmentally friendly designs.

Another major concern arising from technological development is the increase in e-waste generated by discarded gadgets and devices. E-waste contains toxic materials that can harm the environment and human health if not disposed of properly. The production of digital technologies also requires the consumption of natural resources, such as minerals and metals, often extracted in environmentally damaging ways. The construction of data centers and other technology infrastructure requires large amounts of land, which can lead to deforestation and other environmental damage. The transportation of goods and people associated with the technology industry also contributes to air pollution, particularly in areas with high concentrations of tech companies and workers. The significant amount of water needed for cooling and other purposes in data centers and other technology infrastructure can also lead to water scarcity and other environmental issues.

This reality has prompted the need to promote digital sustainability, which refers to the concept of designing, producing, and using digital technologies in a way that minimizes their negative impact on the environment and promotes long-term sustainability.[29] This involves implementing practices that reduce energy consumption, minimize waste, use environmentally friendly materials, and reduce the carbon footprint of digital technologies. Digital sustainability also encompasses the responsible use of technology by individuals and organizations, including reducing e-waste, promoting recycling, and using technology in ways that reduce overall environmental impact. The goal of digital sustainability is to ensure that the benefits of digital technology can be enjoyed without compromising the health of the planet and its resources.

2.7. World Peace

World organizations such as the United Nations, the World Bank, and the Red Cross are placing their hopes on AI’s ability to process massive amounts of data and analyze the complex factors that contribute to humanitarian disasters such as famine, mass migration, and political and religious conflicts among peoples and nations.[30] If it can be predicted where and when these events will take place based on available information, efforts can be made to thwart such calamities. There is optimistic thinking that Big Data and the tools to analyze it will ultimately lead to solving many issues plaguing the world.

Although researchers such as Timo Honkela are trying to find ways to build a “peace machine” that can bridge human divides in language, culture, and emotion,[31] others are building AI-based weapons. According to an article in Forbes magazine,
The rapid development of AI weaponization is evident across the board: navigating and utilizing unmanned naval, aerial, and terrain vehicles, producing collateral-damage estimations, deploying “fire-and-forget” missile systems and using stationary systems to automate everything from personnel systems and equipment maintenance to the deployment of surveillance drones, robots and more are all examples.[32]
Many individuals, organizations, and governments are involved in the debate around the ethical issues surrounding AI and autonomous weapon systems. The UN Secretary-General António Guterres stated in a tweet on 26 March 2019, “Autonomous machines with the power and discretion to select targets and take lives without human involvement are politically unacceptable, morally repugnant and should be prohibited by international law.”[33] To date, 28 countries have called for the prohibition of fully autonomous weapons, also known by the ominous title of “killer robots.”[34] Notable among them are the Holy See and China. China, however, calls only for banning the use of fully autonomous weapons, not their development and production. Peter Singer, a specialist in 21st-century warfare, commented, “They’re simultaneously working on the technology while trying to use international law as a limit against their competitors.”[35] This is not surprising, considering that China is on the list of countries, alongside the US, Russia, the UK, Israel, and South Korea, involved in the race to develop these types of weapons.[36] The international diplomatic process to set up norms on their development and use has, unfortunately, been much slower than the pace at which the competitors have pursued research and development. The commitment of the United States to this race is undoubtedly spurred by China’s determination to challenge America’s place as the world’s superpower. President Xi Jinping has set the goal for China to become the world leader in AI by 2030, with military innovation an essential part of this agenda.

2.8. Responsibility and Accountability

As technology becomes more integrated into our daily lives, there are growing concerns about responsibility and accountability.[37] One of the primary issues is determining who is responsible for the outcomes of advanced technologies such as artificial intelligence (AI). For instance, if an autonomous vehicle causes an accident, should the manufacturer, the owner, or the software developer be held accountable? The ethical considerations associated with the use of advanced technologies, including AI, robotics, and biotechnology, are also growing. As these technologies become more sophisticated, they raise questions about privacy, security, and human rights. Concern about bias and discrimination in algorithms and AI-based technologies is also increasing; facial recognition technology, for example, has higher error rates for people of color, raising questions of racial bias.

Cybersecurity threats are another major issue as more sensitive information is stored and transmitted online. State actors, hackers, and cybercriminals can exploit vulnerabilities in systems, raising concerns about how to prevent and mitigate these risks. As technology companies become more powerful, there are concerns about their social responsibility. Critics argue that social media platforms have contributed to the spread of misinformation and polarization, and that these companies need to do more to address these issues.[38] Lastly, there is a growing need for regulation and governance of advanced technologies to ensure they are used responsibly and ethically. However, there is also concern that overregulation could stifle innovation.

2.9. Impact on Religion

Trends in the development of digital technologies are expected to have a significant impact on faith and religiosity in various ways. One significant impact is greater access to religious information, including texts, teachings, and traditions from diverse sources. This could lead to a greater understanding and appreciation of different religions and faiths and provide opportunities for interfaith dialogue and collaboration. Another impact of digital development is the potential for new forms of religious expression. For instance, virtual reality may be used to create immersive religious experiences, while social media platforms could be used to share and discuss religious ideas and practices.

However, one growing concern among some religious leaders is the erosion of traditional religious authority due to digital development trends. Digital technologies have made it easier for people to access information and ideas from a variety of sources, potentially undermining the authority of traditional religious institutions and leaders. For example, access to non-traditional or non-mainstream sources of religious knowledge may create confusion and uncertainty about which sources are legitimate or authoritative, leading some people to question the authority of traditional religious institutions and leaders.[39] Moreover, the trend towards decentralization in digital development may also challenge traditional religious authority structures. Individuals and smaller religious groups may have greater access to religious texts and teachings, potentially sharing their own interpretations and ideas with a wider audience.[40] This can potentially challenge the authority of traditional religious institutions and leaders, who may be seen as less relevant or necessary in a more decentralized religious landscape.

Similarly, disintermediation in digital development may also impact religious authority. This refers to the use of digital platforms and tools to connect individuals directly with religious texts, teachings, and communities, bypassing traditional intermediaries such as religious leaders, institutions, or structures. This may empower individual believers to explore and interpret religious texts and teachings in new ways without relying on traditional religious authorities to provide guidance or interpretation, leading to greater fragmentation and diversity within religious communities.

In essence, digital technology has had and will continue to have a significant impact on human society, with both positive and negative consequences. While the benefits are undeniable, it is crucial to acknowledge and address the potential downsides. This section has delved into the possible negative outcomes not to discredit technology’s potential and realized benefits but to recognize that scientific and technological progress, despite being aimed at human betterment, is not immune to risks and dangers. Although scientists and technologists may intend to create positive change, the application of their discoveries takes place within a complex framework of politics, economics, and society. This framework is rife with individuals and institutions vying for power, wealth, and control, and it is often fueled by personal biases and agendas. Thus, technological advancements, even with good intentions, may cause unintended harm. They can erode our privacy, disconnect us from reality, and hinder our capacity for deep thought and introspection. These present and potential risks highlight the need for a critical and mindful approach to technological progress and its impact on human and environmental well-being. Therefore, we must strike a balance between the pursuit of technological advancements and the promotion of human and ecological flourishing. The key is to approach progress with discernment, always considering its potential impact on society and the planet. Only then can we ensure that technology serves us rather than the other way around.

_________
17. William Gibson Interview, http://josefsson.net/gibson/.
18. Steven Feldstein, “The Global Expansion of AI Surveillance,” Carnegie Endowment for International Peace (17 September 2019), https://carnegieendowment.org/2019/09/17/global-expansion-of-ai-surveillance-pub-79847.
19. Alina Polyakova and Chris Meserole, “Exporting Digital Authoritarianism: The Russian and Chinese Models,” Brookings Institution, https://www.brookings.edu/wp-content/uploads/2019/08/FP_20190827_digital_authoritarianism_polyakova_meserole.pdf.
20. Carl Benedikt Frey and Michael A. Osborne, “The Future of Employment: How Susceptible Are Jobs to Computerisation?” Technological Forecasting and Social Change 114 (2017): 254-280.
21. Carl Benedikt Frey and Michael A. Osborne, “Automation and the Future of Work—Understanding the Numbers,” Oxford Martin School, April 13, 2018, https://www.oxfordmartin.ox.ac.uk/blog/automation-and-the-future-of-work-understanding-the-numbers/.
22. “Asia’s AI agenda—AI and human capital,” MIT Technology Review (2019), https://s3-ap-southeast-1.amazonaws.com/mittr-intl/AsiaAItalent.pdf.
23. Oxfam, https://www.oxfam.org/en/5-shocking-facts-about-extreme-global-inequality-and-how-even-it
24. Yuval Noah Harari, 21 Lessons for the 21st Century (New York: Spiegel & Grau, 2018): Kindle edition.
25. Kevin Roose, “His 2020 Campaign Message: The Robots Are Coming,” New York Times, February 10, 2018, https://www.nytimes.com/2018/02/10/technology/his-2020-campaign-message-the-robots-are-coming.html.
26. Tucker Bailey, Andrea Del Miglio, and Wolf Richter, “The Rising Strategic Risks of Cyberattacks,” McKinsey Quarterly, May 1, 2014, https://www.mckinsey.com/capabilities/mckinsey-digital/our-insights/the-rising-strategic-risks-of-cyberattacks.
27. Medical News Today, “Negative Effects of Technology: What to Know,” February 25, 2020, https://www.medicalnewstoday.com/articles/negative-effects-of-technology.
28. Geneva Environment Network, “Data, Digital Technology, and the Environment,” Geneva Environment Network, November 25, 2021, https://www.genevaenvironmentnetwork.org/resources/updates/data-digital-technology-and-the-environment/.
29. Rebellion Research, “What Is Digital Sustainability?” Rebellion Research, July 24, 2021, https://www.rebellionresearch.com/what-is-digital-sustainability.
30. “How AI Could Unlock World Peace,” BBC, February 19, 2019, https://www.bbc.com/future/article/20190219-how-artificial-intelligence-could-unlock-world-peace.
31. Niko Nurminen, “Could Artificial Intelligence Lead to World Peace?” Al Jazeera, May 30, 2017, https://www.aljazeera.com/indepth/features/2017/05/scientist-race-build-peace-machine-170509112307430.html.
32. Jayshree Pandya, “The Weaponization of Artificial Intelligence,” Forbes, January 14, 2019, https://www.forbes.com/sites/cognitiveworld/2019/01/14/the-weaponization-of-artificial-intelligence/.
33. “Autonomous weapons that kill must be banned, insists UN chief,” UN News, March 25, 2019, https://news.un.org/en/story/2019/03/1035381.
34. “Campaign to Stop Killer Robots,” November 22, 2018, https://www.stopkillerrobots.org/wp-content/uploads/2018/11/KRC_CountryViews22Nov2018.pdf.
35. Melissa K. Chan, “China and the US are Fighting a Major Battle over Killer Robots and the Future of AI,” Time, September 13, 2019, https://time.com/5673240/china-killer-robots-weapons/.
36. Justin Rohrlich, “Report: Kill the Idea of Killer Robots before They Kill Us,” Quartz (9 May 2019), https://qz.com/1614684/killer-robots-must-be-stopped-pax-tells-the-world/.
37. See Mark Coeckelbergh, AI Ethics (Boston, MA: MIT Press, 2020): Kindle edition.
38. Ally Daskalopoulos et al., “Thinking Outside the Bubble: Addressing Polarization and Disinformation on Social Media,” CSIS, September 27, 2021, https://journalism.csis.org/thinking-outside-the-bubble-addressing-polarization-and-disinformation-on-social-media/.
39. Anthony Le Duc, “The Way, the Truth and the Life: Asian Religious Communication in the Post-Truth Climate,” Religion and Social Communication 16, no. 1 (2018): 28-29.
40. Le Duc, “The Way, the Truth and the Life,” 29.
