The underlying learning architecture for ChatGPT and other Large Language Models (LLMs) is the transformer architecture, first described in a 2017 paper by Google engineers.

Shortly after its first application to language, we used the same idea to build the Beethoven AI that completed the sketches of his 10th Symphony.

Language and music are sequence-based domains of knowledge, and such domains work particularly well with transformers.
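
To make that concrete, here is a minimal sketch (not the actual Beethoven AI or ChatGPT code) of why the same transformer stack can serve both domains: once words or musical notes are turned into token sequences, the architecture does not care what the tokens mean. The vocabulary sizes, dimensions, and class name below are illustrative assumptions, and positional encodings are omitted for brevity.

```python
import torch
import torch.nn as nn

class SequenceModel(nn.Module):
    """A tiny transformer that predicts the next token in any sequence."""

    def __init__(self, vocab_size: int, d_model: int = 256,
                 n_heads: int = 4, n_layers: int = 2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)            # token -> vector
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)     # stacked self-attention
        self.head = nn.Linear(d_model, vocab_size)                # score for each next token

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        return self.head(self.encoder(self.embed(tokens)))

# Identical architecture, different vocabularies:
text_model = SequenceModel(vocab_size=50_000)   # e.g. word or subword tokens
music_model = SequenceModel(vocab_size=128)     # e.g. MIDI pitch tokens
```

Swapping the domain means swapping the tokenizer and the training data, not the architecture.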

They came first. But name any sequence-based domain you can think of, and someone is already building a transformer-based AI system for it.

What other domains can you think of? What is next?

Language and Music Came First. What Is Next?

Any sequence-based domain of human knowledge might be a good fit for the transformer architecture in AI. What can we expect to see?