Editorial Summary:

The term originates from a workshop held at Dartmouth in the summer of 1956. Early approaches to AI revolved around machine-readable knowledge: attempts were made to encode human knowledge in a knowledge store and use search and logic to make inferences. However, human knowledge is messy, ever-changing, and full of contradictions and ambiguity. Over time, a more popular approach became to learn a model of the world from data. The current wave of AI is driven by this idea, and in particular by the success of machine learning (ML). A language model, for example, might be trained on billions of words of text. But data scraped from the web is not always representative of real life. Wikipedia, for instance, is an important source of text data for many machine learning models, and a model trained on text collected before 2020 would know nothing of Covid-19 and all the changes it has brought to our lives. Despite these issues, machine learning is behind many of the AI products we use today: models are successfully being trained on audio, text, images, and video, and are used daily by large numbers of people.
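To make the "learn from data" idea concrete, here is a toy sketch (not from the article) of the simplest possible language model: a bigram model that, rather than using hand-coded rules, predicts the next word purely from counts observed in training text. The corpus and function names are illustrative assumptions.

```python
from collections import defaultdict, Counter

# Toy training text standing in for "billions of words" of web data.
corpus = "the cat sat on the mat the cat ate".split()

# Learn from data: count which word follows which in the training text.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat": it follows "the" most often here
```

The model's limits mirror the article's point about data: it knows only what its training text contains, so a word (or event) absent from the corpus simply does not exist for it.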

Key Highlights:

  • Early approaches to AI revolved around machine-readable knowledge.
  • Attempts were made to encode human knowledge in a knowledge store and use search and logic to make inferences.
  • Data scraped from the web is not always representative of real life.
  • Wikipedia is an important source of text data for many machine learning models.

This editorial is based on content sourced from medium.com.

