Discover the key technical terms associated with generative AI and their meanings.
API
– stands for Application Programming Interface: a set of rules that allows one piece of software to request services or data from another without needing to know how the other system works internally.
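For instance, a program can request data from a web service through its API. A minimal sketch, assuming the Python requests library and a purely hypothetical endpoint:

```python
import requests

# Hypothetical endpoint and key, used purely for illustration.
response = requests.get(
    "https://api.example.com/v1/models",
    headers={"Authorization": "Bearer YOUR_API_KEY"},
    timeout=10,
)
response.raise_for_status()   # fail loudly if the request was rejected
print(response.json())        # the service replies with structured data (JSON)
```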
Artificial neural networks
– are computer programs or algorithms that simulate the behaviour of neurons in the brain. ANNs are a subset of machine learning and have driven progress in deep learning by making it practical to train networks with many layers.
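To make the idea concrete, the toy sketch below shows a single artificial "neuron": it weights its inputs, sums them, and passes the result through an activation function, loosely mirroring how a biological neuron fires. Stacking many such neurons in layers gives a multi-layer network. The values are illustrative and only NumPy is assumed.

```python
import numpy as np

def neuron(inputs, weights, bias):
    """One artificial neuron: weighted sum of inputs plus a bias, passed through a sigmoid."""
    z = np.dot(inputs, weights) + bias
    return 1.0 / (1.0 + np.exp(-z))   # sigmoid squashes the output to between 0 and 1

# Three input signals, with weights and a bias chosen purely for illustration.
print(neuron(np.array([0.5, 0.1, 0.9]), np.array([0.4, -0.2, 0.7]), bias=0.1))
```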
Deep learning
– refers to the process by which computer models learn. The models are exposed to large volumes of data, and their outputs are compared with expected outcomes to continuously refine their "thinking" and enable learning.
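The loop below sketches that idea at miniature scale: a model makes a prediction, the prediction is compared with the expected outcome, and the model's parameter is nudged to reduce the error. This is a toy illustration of the learning loop, not a real deep learning system.

```python
# Toy "learning" loop: fit y = 2x by repeatedly comparing predictions to expected outcomes.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]   # (input, expected outcome) pairs
weight = 0.0                                   # the model's single adjustable parameter
learning_rate = 0.01

for epoch in range(200):
    for x, expected in data:
        prediction = weight * x
        error = prediction - expected          # compare with the expected outcome
        weight -= learning_rate * error * x    # nudge the parameter to reduce the error

print(round(weight, 2))   # close to 2.0: the model has "learned" the relationship
```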
Deepfake
– refers to an ultra-realistic video or audio recording of a person in which they are made to appear to say or do something fake. The wider availability of powerful processing and larger data sets means that creating a deepfake is now remarkably straightforward.
Foundation models
– are machine learning models trained on vast volumes of data at scale and capable of a wide range of general tasks. OpenAI’s GPT is an example of a foundation model.
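As a sketch of how a pre-trained foundation model can be reused for a general task, the example below assumes the Hugging Face transformers library; the model name and parameters are illustrative, and the model is downloaded the first time the code runs.

```python
from transformers import pipeline

# Load a small pre-trained text-generation model (downloaded on first use).
generator = pipeline("text-generation", model="distilgpt2")

result = generator("Generative AI is", max_new_tokens=20)
print(result[0]["generated_text"])
```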
GitHub
– is a code hosting platform that allows developers to store and manage their code for version control and collaboration.
GPT
– stands for Generative Pre-trained Transformer, the family of large language models developed by OpenAI that underpins tools such as ChatGPT.
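Developers typically access GPT models through OpenAI’s API. A minimal sketch, assuming the openai Python package and an API key set in the environment; the model name is illustrative:

```python
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-4o-mini",   # illustrative model name
    messages=[{"role": "user", "content": "Summarise what a deepfake is in one sentence."}],
)
print(response.choices[0].message.content)
```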
LLMs
– refer to Large Language Models, which have been trained on large volumes of text-based data; the sources of data usually differ depending on the LLM. The algorithms underpinning an LLM analyse that data and learn the probability of one word following another, so when the model is given a prompt, for example a question, it is able to generate an answer.
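At heart this is next-word prediction: the model estimates which word is most likely to follow the prompt so far. The toy sketch below counts word pairs in a tiny made-up corpus to illustrate the idea; real LLMs learn these probabilities with neural networks over vast datasets.

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept".split()

# Count which word follows which: a toy stand-in for the probabilities an LLM learns.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

prompt = "the"
prediction = following[prompt].most_common(1)[0][0]
print(f"After '{prompt}', the most likely next word is '{prediction}'")   # -> 'cat'
```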
Public model
– refers to a publicly available AI model trained on large volumes of data from various sources. OpenAI’s GPT is an example of a public model. Providers of public models control how input information is stored and whether it is incorporated into future versions of the model.