
Artificial Intelligence: A Detailed Study-2022

Less than ten years after helping the Allies win World War II by cracking the Nazi encryption device Enigma, mathematician Alan Turing changed history once more with a straightforward question: “Can machines think?”

Turing’s 1950 article “Computing Machinery and Intelligence” and the accompanying Turing Test established the fundamental goal and vision of AI.

Fundamentally, artificial intelligence (AI) is the branch of computer science that aims to answer Turing’s question in the affirmative: it is the endeavour to replicate or simulate human intelligence in machines. That broad goal has generated a great deal of debate and interest, and in reality there is no universally accepted definition of the field.

Defining AI


The biggest problem with merely “developing intelligent machines” as an AI goal is that it doesn’t define AI or describe what an intelligent machine is. The interdisciplinary science of artificial intelligence (AI) is approached from many different angles, but developments in machine learning and deep learning are driving a paradigm shift in practically every sector of the computer industry.

A 2019 research paper titled “On the Measure of Intelligence” is one example of a newer test that has been proposed recently and generally well received. In it, François Chollet, a seasoned deep learning expert and Google employee, argues that intelligence is the “pace at which a learner transforms their existing knowledge and experience into new skills at worthwhile activities that include uncertainty and adaptation.” In other words, the most intelligent algorithms can predict what will happen in a variety of situations from only a small amount of experience.

In contrast, Stuart Russell and Peter Norvig approach the idea of AI by organising their book, Artificial Intelligence: A Modern Approach, around the theme of intelligent agents in machines. In this light, artificial intelligence (AI) is defined as “the study of agents that acquire perceptions from the environment and perform actions.”
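Russell and Norvig’s framing maps naturally onto a simple perceive-act loop. Below is a minimal, hypothetical sketch of such an agent in Python; the `Environment` and `ThermostatAgent` classes and their methods are purely illustrative and not taken from any particular library.

```python
# A minimal sketch of an intelligent agent in the Russell/Norvig sense:
# it repeatedly acquires a percept from its environment and performs an
# action. All class and method names here are illustrative assumptions.

class Environment:
    def __init__(self):
        self.temperature = 30  # current room temperature in degrees C

    def percept(self):
        """Return what the agent can currently observe."""
        return self.temperature

    def apply(self, action):
        """Let the chosen action change the world."""
        if action == "cool":
            self.temperature -= 1
        elif action == "heat":
            self.temperature += 1


class ThermostatAgent:
    def act(self, percept):
        """Map the current percept to an action."""
        if percept > 22:
            return "cool"
        if percept < 18:
            return "heat"
        return "idle"


env, agent = Environment(), ThermostatAgent()
for _ in range(10):                      # the perceive-act loop
    env.apply(agent.act(env.percept()))
print(env.temperature)                   # settles inside the 18-22 band
```

Even this toy thermostat satisfies the definition above: it perceives its environment (the temperature) and acts on it, although it would clearly not satisfy Chollet’s demand for adaptation to uncertainty.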

Four Approaches to Defining Artificial Intelligence

(i) Thinking like a human being means modelling thought after the human mind.

(ii) Rational thinking is the imitation of logical cognition.

(iii) Acting humanly means behaving in a way that resembles human conduct.

(iv) Rational behaviour refers to behaviour that is intended to accomplish a specific objective.

The first two ideas concern how people think and reason, whereas the remaining two concern how people act. According to Norvig and Russell, “all the skills needed for the Turing Test also allow an agent to act rationally.” They place special emphasis on rational agents that act to achieve the best outcome.

“Algorithms enabled by restrictions, exposed by representations that support models focused at loops that tie thought, perception, and action together,” is how Patrick Winston, a former MIT professor of AI and computer science, characterised AI.

Although these concepts may seem esoteric to the average person, they help to focus the discipline as a branch of computer science and offer a guide for incorporating machine learning and other branches of AI into programmes and machines.

The Four Categories of Machine Intelligence

Based on the kinds and complexity of the tasks a system is capable of performing, AI can be divided into four types. Automated spam filtering, for instance, belongs to the most basic class of artificial intelligence, while the distant possibility of creating machines that can understand human emotions and thoughts belongs to an entirely different one.

1. Reactive Machines

A reactive machine follows the most fundamental AI principles and, as its name suggests, can only use its intelligence to perceive and react to the world in front of it. Because it has no memory, a reactive machine cannot rely on past experiences to inform decisions in the present. Since they experience the world only in the immediate moment, reactive machines can carry out only a narrow range of highly specialised tasks.

However, intentionally limiting the scope of a reactive machine’s worldview means that this kind of AI will be more dependable and trustworthy – it will respond consistently to the same stimuli.

The chess-playing supercomputer Deep Blue, which was created by IBM in the 1990s and defeated Garry Kasparov in a match, is a well-known example of a reactive machine. Deep Blue could only recognise the pieces on a chess board, know how each moves according to the rules of the game, acknowledge each piece’s current position, and decide what the most logical move would be at that precise moment. The machine was not striving to better place its own pieces or to anticipate prospective moves from the other player. Every turn was perceived as its own reality, existing independently of any earlier movements.

Google’s AlphaGo is another game-playing example of a reactive machine. AlphaGo is likewise unable to predict future moves, but it relies on its own neural network to evaluate developments in the present game, which gives it an edge over Deep Blue in a more complex game. In 2016, AlphaGo defeated champion Go player Lee Sedol, and it has also beaten other top-tier opponents in the game.

Despite its narrow scope and the fact that it cannot easily be modified, reactive machine AI can attain a degree of complexity and offers dependability when built to carry out recurring tasks.
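As a rough illustration of the reactive idea, the sketch below picks a chess move by scoring only the options available on the current board and keeps no record of earlier positions. The move encoding and scoring rule are simplified assumptions for illustration; a real system like Deep Blue used far more sophisticated evaluation.

```python
# A toy "reactive" move chooser: it evaluates only the moves that are legal
# on the current board and remembers nothing between calls. The move
# encoding and piece values below are simplified assumptions.

PIECE_VALUE = {"p": 1, "n": 3, "b": 3, "r": 5, "q": 9}

def choose_move(legal_moves):
    """Pick the move with the highest immediate material gain.

    `legal_moves` is a list of (move_name, captured_piece_or_None) pairs
    describing only the present position; no game history is consulted.
    """
    def immediate_gain(move):
        _, captured = move
        return PIECE_VALUE.get(captured, 0)

    return max(legal_moves, key=immediate_gain)[0]

# Each call sees one snapshot of the game and nothing else.
print(choose_move([("Nf3", None), ("Bxb7", "r"), ("Qxd5", "p")]))  # -> Bxb7
```

Because the function holds no state between calls, giving it the same board always yields the same move, which is exactly the consistency described above.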

2. Limited Memory

When gathering information and assessing options, limited memory AI has the capacity to store earlier facts and forecasts, effectively looking back in time for hints on what might happen next. Reactive machines lack the complexity and potential that limited memory AI offers.

Limited memory AI is created either when a team continuously trains a model to interpret and use new data, or when an AI environment is built so that models can be automatically trained and refreshed.

The following six steps must be followed when using machine learning with limited memory AI (a minimal sketch of this cycle is shown after the list):

1. Training data must be created.

2. The machine learning model must be developed.

3. The model must be able to generate predictions.

4. The model must be able to accept feedback from humans or the environment.

5. That feedback must be stored as data.

6. These steps must be repeated as a cycle.
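The sketch below is one hypothetical way to express this cycle in Python using scikit-learn’s `SGDClassifier`, whose `partial_fit` method supports incremental updates; the data is synthetic and the feedback source is simulated.

```python
# A sketch of the six-step limited-memory cycle: create training data,
# build a model, predict, collect feedback, store it, and repeat.
# The data is synthetic and the "feedback" is simulated for illustration.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
model = SGDClassifier(loss="log_loss")            # step 2: the ML model
stored_X, stored_y = [], []

X0 = rng.normal(size=(100, 3))                    # step 1: training data
y0 = (X0[:, 0] > 0).astype(int)
model.partial_fit(X0, y0, classes=[0, 1])

for _ in range(5):                                # step 6: repeat the cycle
    X_new = rng.normal(size=(10, 3))
    preds = model.predict(X_new)                  # step 3: make predictions
    feedback = (X_new[:, 0] > 0).astype(int)      # step 4: receive feedback
    accuracy = (preds == feedback).mean()         # how well the old model did
    stored_X.append(X_new)                        # step 5: store the feedback
    stored_y.append(feedback)
    model.partial_fit(X_new, feedback)            # learn from the feedback
```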

3. Theory of Mind

Theory of mind AI is, at this point, purely speculative. The technological and scientific advances required to reach this level of AI have not yet been achieved.

The idea is founded on the psychological knowledge that one’s own behaviour is influenced by the thoughts and feelings of other living creatures. This would suggest that AI computers would be able to reflect on and decide for themselves how people, animals, and other machines feel and make decisions. Robots ultimately need to be able to understand and interpret the concept of “mind,” the fluctuations of emotions in decision-making, and a litany of other psychological concepts in real time in order to establish two-way communication between people and AI.

4. Self-Awareness

The final stage of AI development will be for it to become self-aware once theory of mind has been established, which will likely take a very long time. This kind of AI possesses human-level consciousness: it is aware of its own existence as well as the presence and emotional states of others. It would be able to understand what others may need based not only on what they communicate to it, but also on how they communicate it.

AI self-awareness depends on human researchers being able to comprehend the basis of consciousness and then figure out how to reproduce it in machines.

How is AI used?


In a 2017 lecture at the Japan AI Experience, DataRobot CEO Jeremy Achin offered the following definition of how AI is used today:

“AI is the ability of a computer system to carry out operations that often require human intelligence… These artificial intelligence systems are frequently powered by machine learning, occasionally by deep learning, and occasionally by really dull stuff like rules.”

Artificial intelligence can also be divided into three categories based on its capabilities. These are better thought of as stages through which AI can develop rather than distinct varieties, and only the first of them is currently feasible.

1. Narrow AI, sometimes known as “weak AI,” simulates human intelligence but only operates in specific contexts. Even though these machines may appear clever, they function under far more restrictions and limits than even the most basic human intelligence. Narrow AI is frequently focused on executing a single task exceptionally well.

2. Artificial General Intelligence (AGI), often known as “strong AI,” is the type of artificial intelligence we see in movies, such as the machines in Westworld or Data in Star Trek: The Next Generation. A machine with general intelligence can use that intelligence to solve any problem, much like a human being.

3. Superintelligence: This will probably mark the apex of AI development. Superintelligent AI will be able to not only mimic but also outperform human intelligence and complex emotion. This could entail forming its own opinions and conclusions, as well as its own ideologies.

Advantages and Disadvantages of Artificial Intelligence

Although AI is undoubtedly seen as a valuable and rapidly developing asset, this young area is not without its drawbacks.

In 2021, the Pew Research Center polled 10,260 Americans about their views on AI. According to the findings, 37% of respondents are more concerned than excited, while 45% are both excited and concerned. Furthermore, more than 40% of respondents said they believed driverless cars would be detrimental to society. Still, nearly 40% of respondents thought it was a good idea to use AI to track the spread of false information on social media.

AI is a boon for increasing efficiency and productivity while lowering the possibility of human error. There are drawbacks as well, however, such as the cost of development and the potential for machines to replace human jobs. It is worth remembering, though, that the artificial intelligence industry also stands to create a variety of jobs, some of which have not even been imagined yet.

Importance of Artificial Intelligence

AI has a variety of applications, including speeding up vaccine research and automating fraud detection.

According to CB Insights, 2021 witnessed a record-breaking year for AI private market activity, with global funding rising 108% from the previous year. Due to its quick acceptance, artificial intelligence (AI) is creating a stir in a number of businesses.

Business Insider Intelligence found that more than half of financial services companies now use AI technologies for risk management and revenue generation in its 2022 research on AI in banking. The application of AI in banking could result in savings of up to $400 billion.

According to a 2021 World Health Organization study on medicine, despite challenges, integrating AI in the healthcare sector “has tremendous potential” since it might lead to benefits like better health policy and more accurate patient diagnosis.

AI has also impacted the entertainment industry. According to Grand View Research, the global market for AI in media and entertainment is expected to grow from $10.87 billion in 2021 to $99.48 billion by 2030. That growth includes AI applications such as plagiarism detection and the creation of high-definition graphics.

Also Read:

1.DALL·E Mini : A Text to Image Converter AI Tool – 2022

2.How To Become Blockchain Developer  ?-2022

3.All You Need to Know about NFTs-2022

4.Crypto Lending – An Easy Source Of Passive Income-2022


What is “Web 3.0 – The Future of the Internet”?-2022

The newest version of the Web is known as Web3, or Web 3.0. Web 2.0 is the current version of the Internet, one dominated by social media platforms and centralisation. Most people’s activity is concentrated on a handful of centrally controlled social media platforms, most of their data is kept in cloud storage facilities and on centralised data servers, and the web applications themselves are hosted on centralised web servers and cloud servers. In this article, let’s take a closer look at Web 3.0 and the key technologies that will shape it.

Tim Berners-Lee originally called this idea the Semantic Web. The term “Web3” was first used in 2014 by Ethereum co-founder Gavin Wood, who sees decentralised technologies as the Web’s future. Elon Musk recently asked on Twitter whether anyone had seen web3. “It’s between a and z,” retorted Jack Dorsey, the founder of Twitter.

Whatever you call the next generation of the Web, Web3 or otherwise, it is almost here. The purpose of this article is to explain Web3 and the technologies that make it up. Before doing that, however, it is worth understanding what Web 1.0 and Web 2.0 are.


Web 1.0

1989 – 2005

The World Wide Web (W3 or WWW), often known simply as the Web, is a network of interconnected computer systems that uses URLs (Uniform Resource Locators) to access and transfer digital content through web browsers. The network communicates over the Hypertext Transfer Protocol (HTTP). Content (web pages, files, photos, videos, and other documents) is hosted and managed by web applications running on web servers.
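In code terms, the Web’s basic transaction is a client dereferencing a URL and retrieving the document over HTTP. The sketch below uses Python’s standard-library `urllib` to fetch a page; the URL is only an example.

```python
# The Web's basic transaction: a client resolves a URL and retrieves the
# document over HTTP(S). Uses only the Python standard library.
from urllib.request import urlopen

url = "https://example.com/"              # any example URL
with urlopen(url) as response:
    status = response.status              # HTTP status code, e.g. 200
    html = response.read().decode("utf-8")

print(status, len(html), "characters of HTML received")
```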

Tim Berners-Lee, an English scientist, and Robert Cailliau, a co-inventor, developed the World Wide Web in 1989. Tim created the first web browser in 1990 while working as a contractor at CERN near Geneva, Switzerland. It was made available to the public in 1991.

Web 1.0 was the first version of the Web, and it consisted mostly of static, hyperlinked web pages with HTML-based content. Most of the content was written and published on web servers by site owners, and users mainly consumed it to read and share information. The user interfaces were static and unresponsive, since each page’s content was embedded directly into the HTML. During this period, the Internet was accessed mostly from desktop computers.

Web 2.0

2005 – present

Web 2.0, commonly referred to as the dynamic web, took shape as more and more companies moved onto the Internet. Data grew more dynamic, and backend databases began to develop and see widespread use. The idea of centralised servers took hold, and cloud computing eventually took over; this is still the case today. We are in the era of the cloud, and almost every organisation is moving its data and applications to public, private, and hybrid clouds.

In addition, HTML5, JavaScript, and CSS were introduced in the Web 2.0 era, making the web accessible anywhere, on devices and screens of any size. The development of Web 2.0 also includes front-end technologies like Angular and React, as well as hybrid and native mobile platforms.

The current, responsive web of today is compatible with all types of web-enabled devices, including computers, servers, tablets, smartphones, IoT, and several more smart gadgets like smart homes and cars.

The Social Web, often known as Web 2.0, enhanced the social and interactive aspects of the Internet. Apps like Facebook, Instagram, and Twitter allow users to interact and converse with people all over the world in addition to consuming content.

Video streaming, interactive photographs and graphics, and dynamic video material that is presented based on the user’s interests and choices are all part of the Web 2.0 era. Nearly 4 billion people worldwide use sites like YouTube, Netflix, and many others to watch videos.

Cloud computing was not the only technology introduced by Web 2.0; there was also serverless, AI, ML, microservices, containers, APIs, interoperability, speech enabled systems, voice apps, and many others.

Front end technologies like WebAssembly, ReactNative, and several others are still being developed as part of the ongoing evolution of Web 2.0.

Web 3.0

Gavin Wood, a co-founder of Ethereum, first used the term Web3 in 2014. The main idea behind Web3 is the use of decentralised blockchain-based platforms to offer consumers ownership over their data and store it on a blockchain. However, in my opinion, Web 3.0 will encompass much more than just blockchain.

Even as Web 2.0 continues to prosper, the Web 3.0 era is knocking at the door, and for good reason.

Web 2.0 is enjoying its golden age, but it has also brought with it numerous issues and challenges. Let’s look at a few of them.

(i) Trust, transparency, and privacy concerns around centralised data management

(ii) Centralized Power

(iii) Data security: the majority of data in Web 2.0 is kept on centralised servers and public clouds, which has made it more susceptible to fraud, cyberattacks, and other failures.

(iv) Management of personal data

Key Features of Web 3.0

1. Semantic Web

A crucial component of Web 3.0 is the Semantic Web, a term popularised by “The Semantic Web,” an article Sir Tim Berners-Lee and his co-authors wrote for Scientific American in May 2001 (Berners-Lee et al.). According to that article, the Semantic Web “is an expansion of the current web in which information is given well-defined meaning, improving the cooperation of computers and people.”

In the Web 2.0 era, data is stored everywhere, and numerous methods are being developed just to make sense of it. In the Web3 concept, data will be stored as information (meaningful data), making it easy for both people and machines to understand and work with.
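One way to make this concrete is to store facts as machine-readable subject-predicate-object triples. The sketch below assumes the third-party `rdflib` Python package; the namespace and statements are invented examples.

```python
# A minimal Semantic Web sketch: facts are stored as machine-readable
# subject-predicate-object triples instead of free-form text.
# Assumes the third-party `rdflib` package is installed (pip install rdflib).
from rdflib import Graph, Literal, Namespace

EX = Namespace("http://example.org/")     # an invented example namespace
g = Graph()

# "Alan Turing wrote 'Computing Machinery and Intelligence'" as a triple
g.add((EX.AlanTuring, EX.wrote, Literal("Computing Machinery and Intelligence")))
g.add((EX.AlanTuring, EX.bornIn, Literal(1912)))

# Both people and machines can now read the same structured statements.
print(g.serialize(format="turtle"))
```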

2. Ubiquity

Omnipresence, often known as ubiquity, means being present everywhere. In the Web3 idea, systems are meant to be accessible to anyone, from anywhere. This extends existing software systems with the aid of technologies such as decentralisation, edge computing, and offline accessibility.

3. AI & Machine Learning

AI and machine learning are another crucial component of Web3. They are the logical next step for modern systems, where automation is expanding with the aid of AI and ML.

4. Decentralised Networks

Blockchain-based decentralised networks are expanding at an incredible pace. Distributed, decentralised systems do not rely on a central authority or central storage. The network is run by operator nodes, which can be located anywhere in the world and communicate over a peer-to-peer protocol, as the sketch below illustrates.
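To make the ledger idea concrete, the toy Python sketch below builds a hash-linked chain of blocks: each block commits to the hash of its predecessor, so tampering with any earlier record breaks every later link. This illustrates only the chaining principle, not a real peer-to-peer consensus protocol.

```python
# A toy hash-linked chain: each block stores the hash of the previous block,
# so altering an earlier record invalidates every block that follows it.
# This is an illustration of the linking idea only, not real consensus.
import hashlib
import json

def make_block(data, prev_hash):
    block = {"data": data, "prev_hash": prev_hash}
    block["hash"] = hashlib.sha256(
        json.dumps({"data": data, "prev_hash": prev_hash}, sort_keys=True).encode()
    ).hexdigest()
    return block

def verify(chain):
    """Check that every block still points at its predecessor's hash."""
    return all(curr["prev_hash"] == prev["hash"]
               for prev, curr in zip(chain, chain[1:]))

chain = [make_block("genesis", "0" * 64)]
chain.append(make_block("alice pays bob 5", chain[-1]["hash"]))
chain.append(make_block("bob pays carol 2", chain[-1]["hash"]))
print(verify(chain))                       # True: the chain is intact

chain[1]["data"] = "alice pays bob 500"    # tamper with history
chain[1]["hash"] = make_block(chain[1]["data"], chain[1]["prev_hash"])["hash"]
print(verify(chain))                       # False: the next link no longer matches
```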

Web 3.0 Applications

1. Wolfram Alpha

Wolfram Alpha ranks among the top Web 3.0 applications of 2022. It is a Wolfram Research product that provides a computational knowledge engine to help visualise data gathered from online databases. Rather than listing websites like a search engine, it is used to provide direct answers, and thanks to its knowledge engine it can often return more precise information in less time than even the Google search engine.

2. Siri, Google Assistant and Alexa

The voice assistants from three of the world’s top tech companies, Google Assistant, Alexa, and Siri, all make use of the semantic web. Thanks to voice recognition and natural language processing, users of these programmes can now perform tasks they previously could not, and the assistants can respond to a wide range of queries from their users.

3. Flickr

Flickr is a website for photography and photo sharing that enables users to find, create, post, and share photographs with people they care about. Flickr has one of the largest public databases with billions of photographs organised into thousands of categories and over 17 million active visitors each month.

4. Facebook/ Meta

If Facebook were a country, it would be the most populous in the world. Facebook and Instagram, Meta’s two most popular social networking sites, influence users’ lives daily and are expanding their reach tremendously. With the use of Web 3.0 technologies, users discover and establish new communities and connections, and apps built around the Facebook ecosystem further increase customer involvement and engagement.

Conclusion

Web 3 is here to stay, and there’s no denying that our educational system needs to change quickly to keep up with students’ changing needs and the skills they’ll need to succeed in the future. But as our educational system changes to keep up with the Web 3, we are confronted with challenging open questions about security, privacy, and addiction as we start to adopt real-world Ed 3 solutions. It is imperative to avoid falling into the myth that technology will be the panacea for all of our current educational problems; it never has been and never will be.

Hopefully, a smarter web and a more personalised browsing experience will lead to a more equitable internet. The most important aspect of Web 3.0 will be user empowerment, since users will have control over their own data.

More sectors will be impacted by AI, ML, IoT, and associated technologies as the momentum behind dApps (decentralised applications) and DeFi (decentralised finance) grows.

Please visit:

1. Role of Blockchain in Global Healthcare System-2022

2. Blockchain-Based Voting System : 2022

to know more about Blockchain applications.