
Artificial Intelligence (AI)

What is artificial intelligence?


While a number of definitions of artificial intelligence (AI) have surfaced over the last few decades, today AI is already being used to automate many IT processes, including data entry, fraud detection, customer service, and predictive maintenance and security. Let us help you evaluate and implement such an agent or process in your business.

A definition: AI is the science and engineering of making intelligent machines, especially intelligent computer programs. It is related to the similar task of using computers to understand human intelligence; however, AI does not have to limit itself to methods that are biologically observable.

Over the years (and from Alan Turing to DeepMind's AlphaGo), artificial intelligence has gone through many cycles, but even to skeptics, the release of OpenAI’s ChatGPT seems to mark a turning point. 

The applications for this technology are growing every day, and we’re just starting to explore the possibilities. But as the hype around the use of AI in business takes off, conversations around ethics become critically important. 

 

Irrespective of the distinction between the types of artificial intelligence (weak AI vs. strong AI), early models like GPT-3, BERT, and DALL-E 2 have shown what's possible. The future is models that are trained on a broad set of unlabeled data and that can be used for many different tasks with minimal fine-tuning.
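As a rough illustration of that pattern, the sketch below assumes the open-source Hugging Face transformers library is installed (our choice for illustration; no particular framework is implied above). A general-purpose pre-trained model is reused, with no fine-tuning at all, for a task it was never explicitly trained on:

    # Minimal sketch, not a production setup: requires `pip install transformers`
    # and downloads a general-purpose pre-trained model on first run.
    from transformers import pipeline

    # The model was pre-trained on broad, largely unlabeled text; here it is
    # reused unchanged for a task we never fine-tuned it on.
    classifier = pipeline("zero-shot-classification")

    result = classifier(
        "The package arrived two weeks late and the box was damaged.",
        candidate_labels=["shipping complaint", "product question", "praise"],
    )
    print(result["labels"][0])  # most likely label, e.g. "shipping complaint"

Pointing the same pre-trained model at a different text or a different label set requires no retraining, which is exactly the "minimal fine-tuning" promise described above.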

There are numerous real-world applications of AI systems today. Below are some of the most common use cases:

  • Speech recognition: Also known as automatic speech recognition (ASR), computer speech recognition, or speech-to-text, this capability uses natural language processing (NLP) to convert human speech into a written format. Many mobile devices incorporate speech recognition into their systems to conduct voice search (e.g., Siri) or to provide more accessibility around texting.

  • Customer service: Online virtual agents are replacing human agents along the customer journey. They answer frequently asked questions (FAQs) around topics like shipping, or provide personalized advice, cross-selling products or suggesting sizes for users, changing the way we think about customer engagement across websites and social media platforms. Examples include messaging bots on e-commerce sites, messaging apps such as Slack and Facebook Messenger, and tasks usually handled by virtual assistants and voice assistants.

  • Computer vision: This AI technology enables computers and systems to derive meaningful information from digital images, videos, and other visual inputs, and to take action based on those inputs. This ability to provide recommendations distinguishes it from image recognition tasks. Powered by convolutional neural networks, computer vision has applications in photo tagging on social media, radiology imaging in healthcare, and self-driving cars in the automotive industry (a minimal sketch of such a network follows this list).

  • Recommendation engines: Using data on past consumption behavior, AI algorithms can discover trends that help develop more effective cross-selling strategies. Online retailers use this to make relevant add-on recommendations to customers during the checkout process (see the co-purchase sketch after this list).

  • Automated stock trading: Designed to optimize stock portfolios, AI-driven high-frequency trading platforms make thousands or even millions of trades per day without human intervention.
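To make the computer vision example above a little more concrete, here is a deliberately tiny convolutional network written with PyTorch (our choice of framework for illustration). Real photo-tagging or radiology models are far larger, but the building blocks are the same:

    import torch
    import torch.nn as nn

    class TinyCNN(nn.Module):
        """A deliberately small convolutional network for 32x32 RGB images."""

        def __init__(self, num_classes=10):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, kernel_size=3, padding=1),  # 16 learned local filters
                nn.ReLU(),
                nn.MaxPool2d(2),                             # 32x32 -> 16x16
                nn.Conv2d(16, 32, kernel_size=3, padding=1),
                nn.ReLU(),
                nn.MaxPool2d(2),                             # 16x16 -> 8x8
            )
            self.classifier = nn.Linear(32 * 8 * 8, num_classes)

        def forward(self, x):
            # Extract visual features, then map them to one score per class.
            return self.classifier(self.features(x).flatten(1))

    # One forward pass on a dummy batch of four images.
    logits = TinyCNN()(torch.randn(4, 3, 32, 32))
    print(logits.shape)  # torch.Size([4, 10])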
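For the recommendation engines example, the simplest possible version of the idea fits in a few lines of plain Python: count which items are bought together, then suggest the strongest co-purchases at checkout. The baskets and product names below are made up for illustration; a real engine learns from live order data and far richer signals:

    from collections import Counter
    from itertools import combinations

    # Hypothetical purchase histories; in practice these come from order data.
    baskets = [
        {"laptop", "mouse", "laptop bag"},
        {"laptop", "mouse"},
        {"phone", "phone case", "screen protector"},
        {"laptop", "laptop bag"},
    ]

    # Count how often each ordered pair of items appears in the same basket.
    co_occurrence = Counter()
    for basket in baskets:
        for a, b in combinations(sorted(basket), 2):
            co_occurrence[(a, b)] += 1
            co_occurrence[(b, a)] += 1

    def recommend(item, top_n=3):
        """Suggest the items most frequently bought alongside `item`."""
        scores = {b: n for (a, b), n in co_occurrence.items() if a == item}
        return sorted(scores, key=scores.get, reverse=True)[:top_n]

    print(recommend("laptop"))  # e.g. ['laptop bag', 'mouse']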

 

History of artificial intelligence: Key dates and names

The idea of 'a machine that thinks' dates back to Ancient Greece. But since the advent of electronic computing (and relative to some of the topics discussed in this article), important events and milestones in the evolution of artificial intelligence include the following:

  • 1950: Alan Turing publishes Computing Machinery and Intelligence. In the paper, Turing—famous for breaking the Nazis' ENIGMA code during WWII—proposes to answer the question 'Can machines think?' and introduces the Turing Test to determine whether a computer can demonstrate the same intelligence (or the results of the same intelligence) as a human. The value of the Turing Test has been debated ever since.

  • 1956: John McCarthy coins the term 'artificial intelligence' at the first-ever AI conference at Dartmouth College. (McCarthy would go on to invent the Lisp language.) Later that year, Allen Newell, J.C. Shaw, and Herbert Simon create the Logic Theorist, the first-ever running AI software program.

  • 1958: Frank Rosenblatt builds the Mark 1 Perceptron, the first computer based on a neural network that 'learned' through trial and error. In 1969, Marvin Minsky and Seymour Papert publish a book titled Perceptrons, which becomes both the landmark work on neural networks and, at least for a while, an argument against future neural network research projects.

  • 1980s: Neural networks that use a backpropagation algorithm to train themselves become widely used in AI applications.

  • 1997: IBM's Deep Blue beats then-world chess champion Garry Kasparov in a chess match (and rematch).

  • 2011: IBM's Watson beats Jeopardy! champions Ken Jennings and Brad Rutter.

  • 2015: Baidu's Minwa supercomputer uses a special kind of deep neural network called a convolutional neural network to identify and categorize images with a higher rate of accuracy than the average human.

  • 2016: DeepMind's AlphaGo program, powered by a deep neural network, beats Lee Sedol, the world champion Go player, in a five-game match. The victory is significant given the huge number of possible moves as the game progresses (over 14.5 trillion after just four moves!). Google had acquired DeepMind in 2014 for a reported USD 400 million.

  • 2023: A rise in large language models (LLMs), such as ChatGPT, creates an enormous change in the performance of AI and its potential to drive enterprise value. With these new generative AI practices, deep-learning models can be pre-trained on vast amounts of raw, unlabeled data.
