Artificial Intelligence Risks Raise Awareness

As the importance and visibility of artificial intelligence (AI) grow, concerns about the negative consequences of using these technologies are also increasing.

Faced with these risks and harmful effects, governments, researchers, civil society organizations, and even companies have been discussing the precautions and measures needed to mitigate them.

The range of risks and dangers is broad. Beyond the widely discussed question of the future of work, possible harms range from discrimination against specific groups to a change in the very notion of humanity and its primacy over machines, along with threats to privacy, damage to the organization of markets, and abuses in the use of weapons.

Artificial intelligence involves complex processing that demands large amounts of data to be effective. The proper functioning of these systems, and the gains that flow from them, therefore push for ever-greater collection of information. Such solutions can amplify already strong concerns about the protection of personal data.

In August of last year, it came to light that Google was working on a project (Project Nightingale) that collected data from millions of patients in the United States through agreements with companies, without those people's knowledge. Google is one of the leading companies in this field, using artificial intelligence in several products, from its search engine to its translator.

Discrimination

As artificial intelligence systems gain ground in all kinds of analyses and decisions, their outputs have begun to affect people's lives directly, including by discriminating against certain groups in processes such as hiring, granting loans, access to rights and benefits, and policing. One of the most controversial applications is facial recognition, which can determine whether a person receives aid, checks in somewhere, or even goes to jail.

In China, a tool called Sense Video went on sale last year with face and object recognition features. The most controversial initiative, however, has been the use of cameras to monitor citizens' actions and movements in order to assign each person a social score, which can be used for different purposes, including differentiating access to services or even imposing sanctions.

Concentration

The ability to influence this wide range of decisions is even greater when an agent holds market power. In the technology sector, large conglomerates dominate their market niches and lead the adoption of artificial intelligence: Microsoft, Amazon, Google, Apple, and Facebook in the United States, and Tencent and Alibaba in China.

Google holds more than 90% of the search market and more than 70% of the browser market, and it also controls the largest video platform in the world, YouTube. Facebook has reached 2.5 billion users and owns four of the world's most popular apps: Facebook, FB Messenger, WhatsApp, and Instagram.

The greater the market share and user base, the more data is collected. And since this personal information is regarded by international bodies such as the Organisation for Economic Co-operation and Development (OECD) and the World Economic Forum (WEF) as a critical economic asset, those who control broad databases gain a crucial competitive advantage, which can reinforce their dominance and harm competitors.

Autonomous Weapons

One of the risks receiving the most attention has been the growth of smart weapons such as drones and autonomous tanks, described as the third revolution in warfare after gunpowder and nuclear weapons. Between 2000 and 2017, the number of countries developing such weapons rose from 2 to more than 50. The countries that have developed these machines the most are the United States, Israel, Russia, France, and China. Such devices heighten the risks of autonomous decision-making, since those decisions come to involve the life and death of individuals.

Ethical Principles

This comprehensive set of risks and controversies has broadened the debate on the precautions and measures needed to mitigate such threats and to direct the use of intelligent machines toward appropriate purposes. Hundreds of researchers signed a document known as the Asilomar Principles at a conference on the topic held in California in 2017.

The text affirms principles such as safety, responsibility, privacy, and benefit to society. It also calls for safeguards such as the ability to explain and audit decisions, failures, and deviations (including in the legal sphere), respect for human values and rights, the sharing of the benefits and economic gains produced by intelligent machines, human control, and the non-subversion of human social dynamics.
