What is AI?
Artificial intelligence (AI) is intelligence demonstrated by machines, as opposed to the natural intelligence of humans and other animals. Example tasks include speech recognition, computer vision, translation between natural languages, and other mappings of inputs to outputs.
Some examples of AI
AI applications include advanced web search engines (e.g., Google Search), recommendation systems (used by YouTube, Amazon, and Netflix), understanding human speech (such as Siri and Alexa), self-driving cars (e.g., Waymo and Tesla), generative or creative tools (ChatGPT and AI art), automated decision-making, and competing at the highest level in strategic game systems (such as chess and Go).
How does AI affect everyday life?
As machines become increasingly capable, tasks considered to require “intelligence” are often removed from the definition of AI, a phenomenon known as the AI effect. For instance, optical character recognition is frequently excluded from things considered to be AI, having become a routine technology.
AI and its journey
Artificial intelligence was founded as an academic discipline in 1956, and in the years since it has experienced several waves of optimism, followed by disappointment and the loss of funding (known as an “AI winter”), followed by new approaches, success, and renewed funding. AI research has tried and discarded many different approaches, including simulating the brain, modeling human problem solving, formal logic, large databases of knowledge, and imitating animal behavior. In the first decades of the 21st century, highly mathematical and statistical machine learning has dominated the field, and this technique has proved highly successful, helping to solve many challenging problems throughout industry and academia.
The various sub-fields of AI research are centered around particular goals and the use of particular tools. The traditional goals of AI research include reasoning, knowledge representation, planning, learning, natural language processing, perception, and the ability to move and manipulate objects. General intelligence (the ability to solve an arbitrary problem) is among the field’s long-term goals. To solve these problems, AI researchers have adapted and integrated a wide range of problem-solving techniques. The techniques include search and mathematical optimization, formal logic, artificial neural networks, and methods based on statistics, probability, and economics. AI also draws upon computer science, psychology, linguistics, philosophy, and many other fields.
Philosophical aspects
The field was founded on the assumption that human intelligence “can be so precisely described that a machine can be made to simulate it”. This raised philosophical arguments about the mind and the ethical consequences of creating artificial beings endowed with human-like intelligence. These issues have been explored by myth, fiction, and philosophy since antiquity. Computer scientists and philosophers have since suggested that AI may become an existential risk to humanity if its rational capacities are not steered towards beneficial goals. The term artificial intelligence has also been criticized for overhyping AI’s true technological capabilities.
Artificial Intelligence – from its beginnings to today
Artificial intelligence is much older than you might imagine; mechanical men even appear in ancient Greek and Egyptian myths. The following milestones trace the history of AI from its origins to the developments of today.
Maturation of Artificial Intelligence (1943-1952)
- Year 1943: The first work now recognized as AI was done by Warren McCulloch and Walter Pitts in 1943, who proposed a model of artificial neurons.
- Year 1949: Donald Hebb demonstrated an updating rule for modifying the connection strength between neurons, now known as Hebbian learning (a minimal sketch of this idea, together with a McCulloch-Pitts style neuron, follows this list).
- Year 1950: Alan Turing, an English mathematician and a pioneer of machine learning, published “Computing Machinery and Intelligence”, in which he proposed a test of a machine’s ability to exhibit intelligent behavior equivalent to that of a human, now called the Turing test.
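To make these two early ideas concrete, here is a minimal sketch in Python. It uses a simplified, modern reading rather than the original 1943 and 1949 formulations, and the function names and numbers are illustrative assumptions only.

```python
# Minimal sketch (not the original 1943/1949 formulations): a McCulloch-Pitts
# style threshold neuron and a simple Hebbian weight update. All names and
# numbers here are illustrative assumptions, not historical code.

def mcculloch_pitts_neuron(inputs, weights, threshold):
    """Fire (return 1) if the weighted sum of the inputs reaches the threshold."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

def hebbian_update(weights, inputs, output, learning_rate=0.1):
    """Hebb's idea: strengthen a connection when its input and the output are active together."""
    return [w + learning_rate * x * output for w, x in zip(weights, inputs)]

# Example: a two-input neuron behaving like a logical AND gate.
weights = [1.0, 1.0]
print(mcculloch_pitts_neuron([1, 1], weights, threshold=2))  # 1 (fires)
print(mcculloch_pitts_neuron([1, 0], weights, threshold=2))  # 0 (does not fire)

# One Hebbian step: weights of active inputs grow when the neuron fires.
weights = hebbian_update(weights, [1, 1], output=1)
print(weights)  # [1.1, 1.1]
```

Even this tiny example captures the core intuition: a neuron “fires” when its weighted inputs cross a threshold, and Hebbian learning strengthens connections between units that are active together.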
The birth of Artificial Intelligence (1952-1956)
- Year 1955: Allen Newell and Herbert A. Simon created the “first artificial intelligence program”, named the Logic Theorist. The program proved 38 of 52 mathematical theorems and found new, more elegant proofs for some of them.
- Year 1956: The term “artificial intelligence” was first adopted by American computer scientist John McCarthy at the Dartmouth Conference, where AI was established as an academic field.
Around the same time, high-level programming languages such as FORTRAN, LISP, and COBOL were invented, and enthusiasm for AI was very high.
The golden years – early enthusiasm (1956-1974)
- Year 1966: Researchers emphasized developing algorithms that could solve mathematical problems. Joseph Weizenbaum created the first chatbot, named ELIZA, in 1966.
- Year 1972: The first intelligent humanoid robot, named WABOT-1, was built in Japan.
The first AI winter (1974-1980)
- The period from 1974 to 1980 was the first AI winter. An AI winter is a period in which computer scientists face a severe shortage of government funding for AI research.
- During AI winters, public interest in artificial intelligence declined.
A boom in AI (1980-1987)
- Year 1980: After the first AI winter, AI came back in the form of “expert systems”: programs designed to emulate the decision-making ability of a human expert (a toy example of such a rule-based system follows this list).
- Also in 1980, the first national conference of the American Association for Artificial Intelligence (AAAI) was held at Stanford University.
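To make the idea concrete, below is a minimal sketch of the forward-chaining, if-then rule matching at the heart of classic expert systems. The rules and facts are invented for illustration and are not taken from any real system.

```python
# Minimal sketch of forward-chaining rule inference, the core mechanism behind
# classic expert systems. The rules and facts below are invented for illustration.

RULES = [
    # (conditions that must all be known facts, conclusion to add)
    ({"fever", "cough"}, "flu_suspected"),
    ({"flu_suspected", "short_of_breath"}, "see_doctor"),
]

def infer(facts):
    """Keep applying rules until no new conclusions can be drawn."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(sorted(infer({"fever", "cough", "short_of_breath"})))
# ['cough', 'fever', 'flu_suspected', 'see_doctor', 'short_of_breath']
```

The “expertise” lives entirely in the hand-written rules supplied by a domain expert, which is also why such systems became expensive to build and maintain as their rule bases grew.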
The second AI winter (1987-1993)
- The period from 1987 to 1993 was the second AI winter.
- Investors and governments again stopped funding AI research because of high costs and limited results. Expert systems such as XCON, though initially very cost effective, became too expensive to maintain.
The emergence of intelligent agents (1993-2011)
- Year 1997: IBM’s Deep Blue beat world chess champion Garry Kasparov, becoming the first computer to defeat a reigning world chess champion.
- Year 2002: AI entered the home for the first time in the form of Roomba, a robotic vacuum cleaner.
- Year 2006: AI entered the business world; companies such as Facebook, Twitter, and Netflix started using AI.
Deep learning, big data and artificial general intelligence (2011-present)
- Year 2011: IBM’s Watson won Jeopardy!, a quiz show in which it had to answer complex questions and riddles, proving that it could understand natural language and answer tricky questions quickly.
- Year 2012: Google launched Google Now, an Android feature that could provide information to the user predictively.
- Year 2014: The chatbot “Eugene Goostman” won a competition based on the famous Turing test.
- Year 2018: IBM’s Project Debater debated complex topics with two master debaters and performed extremely well.
In the same year, Google demonstrated Duplex, a virtual-assistant AI that booked a hairdresser appointment over the phone; the person on the other end did not notice she was talking to a machine.
AI has now developed to a remarkable level. Deep learning, big data, and data science are booming, and companies such as Google, Facebook, IBM, and Amazon are using AI to build impressive products and devices. The future of artificial intelligence is inspiring, with ever more capable systems ahead.
AI services such as ChatGPT and Google Bard have taken the market by storm, and images generated by AI art services like Midjourney, BlueWillow, and others are all over social media. That, however, is a topic for another article.