Artificial intelligence (AI) is the capacity of a digital computer or computer-driven robot to perform tasks typically associated with intelligent beings. The term is commonly applied to the development of systems with characteristically human abilities, such as the capacity to reason, to discover meaning, to generalise, or to learn from prior experience.
Since the introduction of the digital computer in the 1940s, computers have been shown to perform highly complex tasks – such as discovering proofs of mathematical theorems or playing chess – with considerable expertise.

What is artificial intelligence (AI)?

Back in the 1950s, the fathers of the field, Marvin Minsky and John McCarthy, described artificial intelligence as any task performed by a machine that would previously have been considered to require human intelligence.

That’s obviously a fairly broad definition, which is why you will sometimes see arguments over whether something is truly AI or not.

Modern definitions of what it means to create intelligence are more specific. Francois Chollet, an AI researcher at Google and creator of the machine-learning software library Keras, has said intelligence is tied to a system’s ability to adapt and improvise in a new environment, to generalise its knowledge and apply it to unfamiliar scenarios.

“Intelligence is the efficiency with which you acquire new skills at tasks you didn’t previously prepare for,” he said.

“Intelligence is not skill itself; it’s not what you can do; it’s how well and how efficiently you can learn new things.”

It’s a definition under which modern AI-powered systems, such as virtual assistants, would be characterised as having demonstrated ‘narrow AI’: the ability to generalise their training to a limited set of tasks, such as speech recognition or computer vision.

Typically, AI systems demonstrate at least some of the following behaviours associated with human intelligence: planning, learning, reasoning, problem-solving, knowledge representation, perception, motion, and manipulation and, to a lesser extent, social intelligence and creativity.

What are the uses for AI?

AI is everywhere nowadays: it recommends what you should watch or buy next online, recognises who and what is in a photo, filters out spam, spots credit-card fraud, and works out what you say to virtual assistants such as Amazon’s Alexa and Apple’s Siri.

Some of the different uses for AI include:

Big Data Research

Artificial intelligence can help us make sense of massive amounts of data, including unstructured data. AI has helped organizations find new insights that had been locked away in stored data. Hidden within that data lies the potential to build remarkable businesses and to resolve some of the world’s largest challenges.

Customer Management Systems

Artificial intelligence is being used to alter customer relationship management systems. Some software systems, such as Zoho or Salesforce, require significant human maintenance to remain accurate. However, when an AI is applied to these platforms, they are transformed into auto-correcting, self-updating systems that efficiently store and manage data, without constant glitches.


Aviation and Defence

The AOD (Air Operations Division) uses AI for training purposes. Artificial intelligence is currently being used in mission-management aids, combat and training simulators, and support tools for tactical decisions. Aeroplane simulators use artificial intelligence to process data taken from simulated training flights, as well as simulated aircraft warfare.

Self-driving Automobiles

Fully AI-driven cars and trucks are not yet an option. Nalin Gupta, the Director of Business Development at Ridecell, stated, “Safety is crucial when it comes to autonomous vehicles, and for the public to embrace AVs, they have to be safer compared to human-driven vehicles.” He was referring to the fatal accident in 2018 in which an autonomous Uber killed a pedestrian, and to the death of Jeremy Banner in 2019 while the autopilot feature of his car was engaged.

In the Classroom

A promising innovation is the concept of a personalized AI tutor for each student. Because a single teacher cannot work simultaneously with every student, an AI tutor would help students to get extra help in areas where they need it.

Hospitals and Medicine

Artificial intelligence now helps people with diabetes to regulate their blood sugar. AI automates prescription refills and connects call-centre customers with the person most qualified to answer their questions. As algorithms, computing power, and the availability of data continue to advance, the range of opportunities continues to expand. These are predicted to include:

  • Designing treatment plans
  • Big-data research — mining medical records to provide more useful information
  • Providing companion robots to care for the elderly
  • Predicting the likelihood of death from surgical procedures
  • Analysing heart sounds
  • Creating new drugs

Financial Trading

Several banks and proprietary trading firms currently have entire portfolios being managed by AI systems. Additionally, complex AI systems are used in “algorithmic trading.” They make trading decisions several times faster than humans are capable of, and can make millions of trades per day without human intervention. This is referred to as high-frequency trading and represents a fast-growing sector in financial trading. 
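A minimal sketch of the kind of rule such a system might automate is a moving-average crossover: buy when recent prices rise above the longer-term trend, sell when they fall below. The prices, window sizes, and function names here are purely illustrative; real algorithmic-trading systems are vastly more complex.

```python
def moving_average(prices, window):
    """Average of the most recent `window` prices."""
    return sum(prices[-window:]) / window

def signal(prices, short=3, long=5):
    """Return 'buy' when the short-term average rises above the
    long-term average, 'sell' when it falls below, else 'hold'."""
    if len(prices) < long:
        return "hold"
    short_ma = moving_average(prices, short)
    long_ma = moving_average(prices, long)
    if short_ma > long_ma:
        return "buy"
    if short_ma < long_ma:
        return "sell"
    return "hold"

prices = [100, 101, 102, 104, 107]  # trending upward
print(signal(prices))               # → buy
```

High-frequency trading applies decisions like this millions of times per day, with execution latency rather than rule complexity often being the competitive edge.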


Smart Cities and Manufacturing

AI has been combined with a variety of sensor technologies supporting both smart cities and several manufacturing industries. Sensors form part of the IoT (the Internet of Things) and are used to collect data that the AI processes and uses to make decisions. Sensors can monitor such things as traffic flow, when lighting is needed, problems with a conveyor belt, and even available parking spaces.
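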

Personal Finance

Products have been developed that use artificial intelligence to help people manage their personal finances. Digit, for example, offers an AI-powered app that helps consumers optimize their spending habits based on personal goals and behaviours. The app analyzes monthly income, spending habits, and current balance, then makes its own decisions and may, for example, transfer money to a savings account.
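The decision flow described above can be sketched as a simple rule over income, spending, and balance. This is a hypothetical illustration, not Digit’s actual algorithm: the buffer, rate, and function names are invented.

```python
def savings_transfer(monthly_income, monthly_spending, balance,
                     buffer=500, rate=0.3):
    """Move a fraction of the monthly surplus to savings, but only
    if the current balance stays above a safety buffer."""
    surplus = monthly_income - monthly_spending
    if surplus <= 0 or balance <= buffer:
        return 0.0                      # nothing safe to move
    transfer = surplus * rate
    # Never let the transfer dip the balance below the buffer.
    return min(transfer, balance - buffer)

print(savings_transfer(3000, 2400, 1200))  # → 180.0
```

The point of a learned system over a fixed rule like this is that the buffer and rate would be adapted per user from observed behaviour rather than hard-coded.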

What are the different types of AI?

At a very high level, artificial intelligence can be split into two broad types: 

Narrow AI

Narrow AI is what we see all around us in computers today — intelligent systems that have been taught or have learned how to carry out specific tasks without being explicitly programmed how to do so.

This type of machine intelligence is evident in the speech and language recognition of the Siri virtual assistant on the Apple iPhone, in the vision-recognition systems on self-driving cars, or in the recommendation engines that suggest products you might like based on what you bought in the past. Unlike humans, these systems can only learn or be taught how to do defined tasks, which is why they are called narrow AI.
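A recommendation engine of the kind mentioned above can be illustrated in miniature: score items by how often they were bought alongside things the user already owns. The data and scoring here are made up; production engines use far richer signals.

```python
from collections import Counter

purchases = {                       # user -> items bought in the past
    "ann": {"laptop", "mouse", "monitor"},
    "bob": {"laptop", "mouse", "keyboard"},
    "cat": {"mouse", "keyboard"},
}

def recommend(user):
    """Rank items the user doesn't own by co-purchase overlap."""
    owned = purchases[user]
    scores = Counter()
    for other, items in purchases.items():
        if other == user:
            continue
        overlap = len(owned & items)    # shared purchase history
        for item in items - owned:
            scores[item] += overlap
    return [item for item, _ in scores.most_common()]

print(recommend("ann"))  # → ['keyboard']
```

This is narrow AI in the sense of the article: the system generalises only within the single task it was built for.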

General AI

General AI is very different and is the type of adaptable intellect found in humans, a flexible form of intelligence capable of learning how to carry out vastly different tasks, anything from haircutting to building spreadsheets or reasoning about a wide variety of topics based on its accumulated experience. 

This is the sort of AI more commonly seen in movies, the likes of HAL in 2001 or Skynet in The Terminator, but which doesn’t exist today – and AI experts are fiercely divided over how soon it will become a reality.

What are recent points of reference in AI development?

While modern narrow AI may be limited to performing specific tasks, within their specialisms, these systems are sometimes capable of superhuman performance, in some instances even demonstrating superior creativity, a trait often held up as intrinsically human.

A watershed moment came in 2012, when the AlexNet system won the ImageNet image-recognition contest by a wide margin. AlexNet’s performance demonstrated the power of learning systems based on neural networks, a model for machine learning that had existed for decades but was finally realising its potential thanks to refinements in architecture and the leaps in parallel-processing power made possible by Moore’s Law. The prowess of machine-learning systems at computer vision also hit the headlines that year, with Google training a system to recognise an internet favourite: pictures of cats.

The next demonstration of the efficacy of machine-learning systems to catch the public’s attention was the 2016 triumph of the Google DeepMind AlphaGo AI over a human grandmaster in Go, an ancient Chinese game whose complexity had stumped computers for decades. Go has about 200 possible moves per turn, compared to about 20 in chess. Over the course of a game of Go, there are so many possible moves that searching through each of them in advance to identify the best play is too costly from a computational point of view. Instead, AlphaGo was taught to play the game by feeding some 30 million moves played by human experts into deep-learning neural networks.
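The arithmetic behind that computational barrier is simple to check: the number of positions to examine grows as the branching factor raised to the search depth, so Go’s wider branching makes exhaustive lookahead explode far faster than chess’s. The 20 and 200 below are the approximate per-turn move counts cited above.

```python
def tree_size(branching, depth):
    """Number of move sequences in a full game tree of given depth."""
    return branching ** depth

for depth in (2, 4, 6):
    chess = tree_size(20, depth)   # ~20 moves per turn in chess
    go = tree_size(200, depth)     # ~200 moves per turn in Go
    print(f"depth {depth}: chess ~{chess:,} positions, Go ~{go:,}")
```

At a lookahead of just six moves, Go already demands tens of trillions of evaluations, which is why AlphaGo learned from expert play instead of searching exhaustively.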

Perhaps the most striking example of AI’s potential came late in 2020 when Google DeepMind’s attention-based neural network AlphaFold 2 demonstrated a result some have called worthy of a Nobel Prize for Chemistry.

The system’s ability to look at a protein’s building blocks, known as amino acids, and derive that protein’s 3D structure could profoundly impact the rate at which diseases are understood, and medicines are developed. In the Critical Assessment of protein Structure Prediction contest, AlphaFold 2 determined the 3D structure of a protein with an accuracy rivalling crystallography, the gold standard for convincingly modelling proteins.

Unlike crystallography, which takes months to return results, AlphaFold 2 can model proteins in hours. With the 3D structure of proteins playing such an important role in human biology and disease, such a speed-up has been heralded as a landmark breakthrough for medical science, not to mention potential applications in other areas where enzymes are used in biotech.

What are other types of AI?

Another area of AI research is evolutionary computation.

It borrows from Darwin’s theory of natural selection. It sees genetic algorithms undergo random mutations and combinations between generations in an attempt to evolve the optimal solution to a given problem.
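The mutate-recombine-select loop just described can be sketched in a few lines. The toy goal here (evolving a bit-string of all ones), the population size, and the mutation rate are arbitrary illustrative choices.

```python
import random

random.seed(0)
LENGTH, POP, GENERATIONS = 12, 20, 60

def fitness(genome):
    return sum(genome)                  # count of 1-bits

def mutate(genome, rate=0.05):
    """Flip each bit with small probability (random mutation)."""
    return [1 - g if random.random() < rate else g for g in genome]

def crossover(a, b):
    """Single-point recombination of two parent genomes."""
    cut = random.randrange(1, LENGTH)
    return a[:cut] + b[cut:]

population = [[random.randint(0, 1) for _ in range(LENGTH)]
              for _ in range(POP)]

for _ in range(GENERATIONS):
    # Select the fitter half as parents for the next generation.
    population.sort(key=fitness, reverse=True)
    parents = population[:POP // 2]
    population = [mutate(crossover(random.choice(parents),
                                   random.choice(parents)))
                  for _ in range(POP)]

best = max(population, key=fitness)
print(fitness(best), best)
```

After a few dozen generations the population converges toward the optimum, despite no individual step being anything more than random variation plus selection pressure.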

This approach has even been used to help design AI models, effectively using AI to help build AI. This use of evolutionary algorithms to optimize neural networks is called neuroevolution. It could have an important role to play in helping design efficient AI as the use of intelligent systems becomes more prevalent, particularly as demand for data scientists often outstrips supply. The technique was showcased by Uber AI Labs, which released papers on using genetic algorithms to train deep neural networks for reinforcement learning problems.

Finally, there are expert systems, where computers are programmed with rules that allow them to take a series of decisions based on a large number of inputs, allowing that machine to mimic the behaviour of a human expert in a specific domain. An example of these knowledge-based systems might be, for example, an autopilot system flying a plane.
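In miniature, an expert system is a prioritised set of hand-written if-then rules mapping inputs to decisions. The aviation-flavoured rules, thresholds, and function name below are invented for illustration; a real autopilot is vastly more sophisticated.

```python
def autopilot_advice(altitude_ft, airspeed_kt, pitch_deg):
    """Apply simple hand-written rules in priority order."""
    if airspeed_kt < 120:
        return "increase thrust"        # stall-avoidance rule first
    if altitude_ft < 1000 and pitch_deg < 0:
        return "pull up"                # terrain-avoidance rule
    if pitch_deg > 10:
        return "lower nose"
    return "maintain course"

print(autopilot_advice(800, 150, -5))   # → pull up
```

The defining trait, in contrast to machine learning, is that every rule is authored by a human expert rather than learned from data.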

Which are the leading firms in AI?

With AI playing an increasingly major role in modern software and services, each major tech firm is battling to develop robust machine-learning technology for use in-house and to sell to the public via cloud services.

Each regularly makes headlines for breaking new ground in AI research, although it is probably Google, with its DeepMind AlphaFold and AlphaGo systems, that has made the biggest impact on public awareness of AI.

Which AI services are available?

All of the major cloud platforms — Amazon Web Services, Microsoft Azure and Google Cloud Platform — provide access to GPU arrays for training and running machine-learning models, with Google also gearing up to let users use its Tensor Processing Units — custom chips whose design is optimized for training and running machine-learning models.

All of the necessary associated infrastructure and services are available from the big three: cloud-based data stores capable of holding the vast amounts of data needed to train machine-learning models, services to transform data to prepare it for analysis, visualisation tools to display the results clearly, and software that simplifies the building of models.

These cloud platforms are even simplifying the creation of custom machine-learning models, with Google offering a service that automates the creation of AI models, called Cloud AutoML. This drag-and-drop service builds custom image-recognition models and requires no machine-learning expertise from the user.

Cloud-based, machine-learning services are constantly evolving. Amazon now offers a host of AWS offerings designed to streamline the process of training up machine-learning models and recently launched Amazon SageMaker Clarify, a tool to help organizations root out biases and imbalances in training data that could lead to skewed predictions by the trained model.

For those firms that don’t want to build their own machine-learning models but instead want to consume AI-powered, on-demand services, such as voice, vision, and language recognition, Microsoft Azure stands out for the breadth of services on offer, closely followed by Google Cloud Platform and then AWS. Meanwhile, IBM, alongside its more general on-demand offerings, is also attempting to sell sector-specific AI services aimed at everything from healthcare to retail, grouping these offerings together under its IBM Watson umbrella, and having invested $2bn in buying The Weather Channel to unlock a trove of data to augment its AI services.

Will AI take your job?

The possibility of artificially intelligent systems replacing much of modern manual labour is perhaps a more credible near-future possibility.

While AI won’t replace all jobs, what seems to be certain is that AI will change the nature of work, with the only question being how rapidly and how profoundly automation will alter the workplace.

There is barely a field of human endeavour that AI doesn’t have the potential to impact. As AI expert Andrew Ng puts it: “many people are doing routine, repetitive jobs. Unfortunately, technology is especially good at automating routine, repetitive work”, saying he sees a “significant risk of technological unemployment over the next few decades”.

The evidence of which jobs will be supplanted is starting to emerge. There are now 27 Amazon Go stores in the US — cashier-free supermarkets where customers simply take items from the shelves and walk out. What this means for the more than three million people in the US who work as cashiers remains to be seen.

Amazon is again leading the way in using robots to improve efficiency inside its warehouses. These robots carry shelves of products to human pickers, who select items to be sent out. Amazon has more than 200,000 robots in its fulfilment centers, with plans to add more. But Amazon also stresses that as the number of robots has grown, so has the number of human workers in those warehouses. However, Amazon and smaller robotics firms are working on automating the remaining manual jobs in the warehouse, so it’s not a given that manual and robotic labor will continue to grow hand-in-hand.

Fully autonomous self-driving vehicles aren’t a reality yet, but by some predictions, the self-driving trucking industry alone is poised to take over 1.7 million jobs in the next decade, even without considering the impact on couriers and taxi drivers.

Yet some of the easiest jobs to automate won’t even require robotics. At present, millions of people work in administration, entering and copying data between systems, and chasing and booking appointments for companies. As software gets better at automatically updating systems and flagging the important information, the need for administrators will fall.

Among AI experts, there’s a broad range of opinions about how quickly artificially intelligent systems will surpass human capabilities.
