
Consequences of AI – 2023

Elon Musk, the CEO of Tesla and SpaceX, and the late physicist Stephen Hawking both expressed concern that AI could be extremely harmful. Bill Gates, a co-founder of Microsoft, agrees that caution is needed, but believes that with careful management the positives can outweigh the negatives. With recent advances bringing the possibility of super-intelligent machines much closer than originally anticipated, the moment has come to take stock of the hazards artificial intelligence poses.

Leading researchers at Oxford and UC Berkeley, along with many others working in AI, agree with Hawking and Musk that if advanced AI systems are deployed recklessly, they could permanently cut human society off from a bright future.

This issue has been raised since the early days of computing. But in recent years, largely thanks to improvements in machine-learning techniques, it has become much clearer what we can do with AI, what AI can do for (and to) us, and how much we still don’t know.

We use the term “AI” to describe a wide range of concepts, which leads to confusion, misrepresentation, and people talking past one another in discussions about AI. So here is the overall picture of how artificial intelligence might present a grave threat.

Also visit –

1. 2 Artificial Intelligence Stocks for Future: Palantir & CrowdStrike – 2022
2. Artificial Intelligence & Energy Sector – 2022
3. Artificial Intelligence’s Effects on Humanity – 2022

What is AI?


The goal of artificial intelligence is to build computers that behave intelligently. It is a catch-all phrase used to describe everything from Siri to IBM’s Watson to powerful future technologies.

Some researchers distinguish between “narrow AI,” meaning computer systems that are better than humans in a single, clearly defined field, such as playing chess, creating images, or diagnosing cancer, and “general AI,” meaning systems that can outperform human abilities across a wide range of domains. General AI doesn’t exist yet, but we are beginning to understand the difficulties it will present.

Over the past few years, narrow AI has advanced remarkably. AI systems have made major strides in translation, in games such as chess and Go, in important biological research problems such as predicting how proteins fold, and in image generation. AI systems decide what appears in a Google search or in your Facebook News Feed. They write songs and essays that at first glance appear to have been written by humans. They play strategy games. They are being developed to improve drone targeting and missile detection.

But narrow AI is becoming less narrow. We once advanced AI by methodically teaching computer systems particular concepts. To do computer vision, which lets a computer recognise objects in images and video, researchers wrote methods for detecting edges. To play chess, they programmed in chess-specific heuristics. For natural language processing (speech recognition, transcription, translation, and so on), they drew on linguistics.
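To make that hand-crafted style concrete, here is a minimal Python sketch of an edge detector of the kind that era relied on: a human writes down what an “edge” looks like as a fixed Sobel filter, rather than letting the system learn it from data. The synthetic test image and function name are illustrative, not taken from any particular system.

```python
# Hand-crafted computer vision: a Sobel edge detector. The filter values
# encode a human's definition of an "edge" (a sharp intensity change).
import numpy as np

def sobel_edges(image):
    """Return an edge-strength map for a 2-D grayscale image."""
    kx = np.array([[-1, 0, 1],
                   [-2, 0, 2],
                   [-1, 0, 1]], dtype=float)  # horizontal gradient kernel
    ky = kx.T                                  # vertical gradient kernel
    h, w = image.shape
    edges = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = image[i:i + 3, j:j + 3]
            gx = np.sum(kx * patch)            # horizontal intensity change
            gy = np.sum(ky * patch)            # vertical intensity change
            edges[i, j] = np.hypot(gx, gy)     # gradient magnitude
    return edges

# A synthetic image with a bright square: strong responses appear only
# along the square's border, exactly where the hand-written rule fires.
img = np.zeros((32, 32))
img[8:24, 8:24] = 1.0
print(sobel_edges(img).max())
```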

But in recent years, we’ve become much better at building computers with general learning capabilities. Instead of mathematically specifying the details of a problem ourselves, we let the computer system figure them out from data. We used to treat computer vision as a problem entirely separate from natural language processing or game playing; today, the same methods can address all three.
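As a toy illustration of that shift (a sketch on invented data, not any lab’s actual method), the Python snippet below reuses one small gradient-descent learner, completely unchanged, on two unrelated made-up classification tasks; only the data changes, never the algorithm.

```python
# One general-purpose learner, two unrelated toy tasks.
import numpy as np

rng = np.random.default_rng(0)

def train(X, y, steps=2000, lr=0.1):
    """Fit a logistic-regression classifier by plain gradient descent."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))   # predicted probabilities
        w -= lr * X.T @ (p - y) / len(y)   # gradient of the logistic loss
    return w

# Task 1: classify 2-D points by which side of a line they fall on.
X1 = rng.normal(size=(200, 2))
y1 = (X1[:, 0] + X1[:, 1] > 0).astype(float)

# Task 2: classify toy "documents" from bag-of-words style counts.
X2 = rng.poisson(1.0, size=(200, 5)).astype(float)
y2 = (X2[:, 0] > X2[:, 1]).astype(float)

for X, y in [(X1, y1), (X2, y2)]:
    w = train(X, y)
    acc = np.mean((1.0 / (1.0 + np.exp(-X @ w)) > 0.5) == y)
    print(f"accuracy: {acc:.2f}")          # same code, different problem
```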

Our current approach to AI development has enabled significant advances, but it has also immediately raised ethical concerns. If you train a computer system to predict which convicted felons will reoffend using inputs from a criminal justice system that is biased against Black people and low-income people, its outputs are likely to be biased as well. Making websites more addictive can increase revenue but hurt user experience. Releasing a program that generates plausible fake reviews or fake news can hamper the spread of the truth.
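Here is a hedged toy example of that bias-in, bias-out dynamic in Python: the “historical” labels below are generated to be harsher on one group, and a simple model fit to those labels reproduces the skew even though the underlying risk is identical across groups. All names and numbers are invented for illustration and describe no real dataset.

```python
# Bias in, bias out: a model trained on skewed historical labels.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

group = rng.integers(0, 2, size=n)   # protected attribute (0 or 1)
risk = rng.normal(size=n)            # true risk, identically distributed in both groups

# Historical labelling was harsher on group 1 (an extra +0.8 pushes its
# members over the "high risk" threshold more often).
label = (risk + 0.8 * group + rng.normal(scale=0.5, size=n) > 0.5).astype(float)

# Fit a simple logistic model to the biased labels.
X = np.column_stack([risk, group, np.ones(n)])
w = np.zeros(3)
for _ in range(3000):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    w -= 0.1 * X.T @ (p - label) / n

# The learned model flags group 1 far more often, despite equal true risk.
flagged = 1.0 / (1.0 + np.exp(-X @ w)) > 0.5
print("flag rate, group 0:", flagged[group == 0].mean())
print("flag rate, group 1:", flagged[group == 1].mean())
```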

In other words, the trouble is that while these systems are excellent at pursuing the objective they were trained on, the goal they learned in their training environment often isn’t the one we actually wanted. On top of that, we are building systems complex enough that predicting their behaviour is difficult.
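The Python sketch below illustrates that objective mismatch in miniature: a recommender “optimises” a proxy score (predicted clicks) and succeeds at it, while the quantity we actually cared about (satisfaction) drops relative to a random baseline. The item pool and both scoring formulas are made up for the example.

```python
# Optimising a proxy objective that diverges from the real goal.
import numpy as np

rng = np.random.default_rng(2)
n_items = 1000

sensationalism = rng.uniform(0, 1, n_items)   # hidden item property
clicks = 0.2 + 0.7 * sensationalism           # proxy metric: rewards sensationalism
satisfaction = 0.8 - 0.6 * sensationalism     # true goal: punished by sensationalism

# The "system" dutifully maximises the proxy: pick the 10 clickiest items.
chosen = np.argsort(-clicks)[:10]
baseline = rng.choice(n_items, 10, replace=False)

print("mean satisfaction, click-optimised picks:", satisfaction[chosen].mean())
print("mean satisfaction, random picks:         ", satisfaction[baseline].mean())
```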

For now the harm is limited, because the systems themselves are limited. But as AI systems advance, this tendency could come to affect people in far more harmful ways.

Can a machine ever have intelligence equal to that of a human?


Yes, but today’s AI systems aren’t nearly that sophisticated.

A well-known saying about AI is that “everything that’s easy is hard, and everything that’s hard is easy.” Completing intricate computations in a split second? Easy. Telling whether a picture shows a dog just by looking at it? Hard (until very recently).

Numerous human activities are still beyond the capabilities of AI. For instance, it’s challenging to build an AI system that explores an unfamiliar environment and can find its way, say, from a building’s entrance up some stairs to a particular person’s desk. We are still working out how to build an AI system that can read a book and retain what it means.

Many of the most significant recent developments in AI have come from the deep learning paradigm. Deep learning systems can do amazing things: win games we once thought humans could never lose, generate beautiful and lifelike images, and solve difficult problems in molecular biology.

Some experts believe these developments signal that it is time to consider the risks of more powerful systems, but detractors remain. Sceptics in the field point out that these programs still need a sizable pool of structured data to learn from, that they require carefully chosen parameters, or that they work only in environments designed to avoid the problems we don’t yet know how to handle. They cite self-driving cars, which, despite the billions spent developing them, still perform only passably even under ideal conditions.

Although improvements in raw computer speed have slowed, the cost of computing power is still estimated to fall by roughly a factor of ten every decade. Historically, AI systems have had access to less computational power than the human brain. That is changing. By most projections, the time is approaching when AI systems will have access to as much computational power as humans do.

In other words, back when winning at chess required wholly different techniques from winning at Go, we didn’t have to worry about general AI. Today, however, the same methodology produces fake news or music depending on the training data fed into it. And as far as we can tell, when given more computation, these programs just keep getting better at what they do; we haven’t yet found a ceiling on how good they can get. Almost as soon as deep learning was developed, its methods outperformed every other approach on most problems.

When did scientists start to worry about the risk posed by AI?

In the 21st century, as computers rapidly became a transformative force in our world, younger researchers began raising similar concerns.

Professor Nick Bostrom heads the Future of Humanity Institute and the Governance of Artificial Intelligence Program at the University of Oxford. In his studies of risks to humanity, he considers both broad questions, such as why humans appear to be alone in the cosmos, and specific ones, such as the potential dangers of technologies now under development. He argues that AI puts humanity at risk.

In 2014, he published a book, Superintelligence, outlining the dangers AI poses and the importance of getting it right the first time. He concluded that once a hostile superintelligence existed, humans would be unable to replace it or change its preferences. That would be the end of us.

Others around the world have reached the same conclusion. Eliezer Yudkowsky, founder of and research fellow at the Machine Intelligence Research Institute (MIRI) in Berkeley, a group that aims to develop stronger formal characterisations of the AI safety problem, collaborated with Bostrom on a paper about the ethics of artificial intelligence.

Yudkowsky began his career in AI by worriedly picking holes in others’ proposals for making AI systems safe. He has spent most of it trying to convince his peers that AI systems will, by default, be misaligned with human values (not necessarily opposed to human morality, but indifferent to it), and that preventing this will be a hard technical problem.

Many experts worry that others, by overhyping the field, may doom it once the novelty wears off. But this debate shouldn’t obscure a growing consensus: these are possibilities worth considering, funding, and researching, so that we have policies in place when they are needed.

What are we doing now to prevent the end of the world due to AI?

An article summarising the state of the field in 2018 stated, “It may be said that public policy on AGI [artificial general intelligence] does not exist.”

Technical work on promising ideas is under way, but surprisingly little is being done on public-private partnerships, international cooperation, or policy planning. It is estimated that only about 50 people worldwide work full-time on technical AI safety, and most of that effort is concentrated in a small number of organisations.

Bostrom’s Future of Humanity Institute has released a research agenda for AI governance, focused on “devising global norms, regulations, and institutions to best secure the beneficial development and deployment of sophisticated AI.” The institute has published studies on the dangers of AI misuse, the background of China’s AI agenda, and the relationship between AI and global security.

The longest-running institution focused on technical AI safety is the Machine Intelligence Research Institute (MIRI), whose research centres on building highly reliable agents: artificial intelligence systems whose behaviour we can predict well enough to be confident they are safe. (Full disclosure: MIRI is a nonprofit organisation to which I donated from 2017 to 2019.)

OpenAI, founded in 2015 with backing from Elon Musk among others, is a relatively young organisation, but it has active researchers working on both AI safety and AI capabilities. Its researchers have since developed a range of strategies for safe AI systems, and a 2016 research agenda laid out unresolved technical problems related to preventing accidents in machine learning systems.

Alphabet’s DeepMind, a leader in this field, has a safety team and a technical research agenda. The agenda outlines an approach built around specification (defining goals well), robustness (designing systems that operate within safe limits under volatile conditions), and assurance (monitoring systems and understanding what they are doing). “Our intention is to ensure that AI systems of the future are not just ‘hopefully safe’ but robustly, verifiably safe,” it concludes.

Many people are working on present-day AI ethics issues: the robustness of contemporary machine-learning algorithms to small perturbations, algorithmic bias, and the transparency and interpretability of neural networks, to name just a few. Some of that work may prove useful for averting harmful scenarios.

Overall, though, the state of the field is a little as if nearly all climate researchers were focused on managing the droughts, wildfires, and famines already happening today, with only a small skeleton team dedicated to forecasting the future and roughly 50 researchers working full-time on a plan to turn things around.

Not every company with a sizable AI division has a safety team at all, and some of those that do concern themselves only with algorithmic fairness, not with the dangers posed by cutting-edge systems. The US government has no department for AI.

There are still numerous unanswered questions in the field that, depending on how they are answered, may make AI appear much scarier or much less so.

How concerned should we be?

To those who believe the worry is premature and the risks exaggerated, AI safety competes with other priorities that, well, sound a little less like science fiction, and it’s unclear why AI should take precedence. To those who believe the concerns are significant and real, it seems absurd that we’re devoting so little time and money to addressing them.

While machine learning researchers are right to be sceptical of hype, it’s also hard to ignore that they are using very general techniques to accomplish remarkable, surprising things, and that the low-hanging fruit doesn’t appear to have all been picked yet.

AI appears to be a technology that, once fully developed, will fundamentally alter society. Researchers at several major AI groups liken it to launching a rocket: we must get it right before pressing the “go” button. So it seems imperative to start studying rocketry. Whether or not humanity should be afraid, we should unquestionably be doing our research.