The generative AI world continues to develop at a very fast pace. I’m buried in project work right now, but wanted to quickly note some developments.
The big announcement today was OpenAI’s launch of GPT-4, the next version of its large language model. GPT-4 draws on a larger training set and can also write responses to user-uploaded images.
Note how OpenAI uses higher education tests to demonstrate the software’s improvement, version by version:
Currently it’s only available to ChatGPT Plus subscribers. I became one for this purpose and asked the bot a question: “How are you different from ChatGPT’s previous version?”
As an AI language model, I am a constantly evolving and improving entity based on the advancements in natural language processing and machine learning. Although I’m presented as ChatGPT, I am based on the GPT-4 architecture, which is an advanced version of the previous GPT-3 model. While specific differences may not be mentioned explicitly, the improvements between the versions typically include:
- Larger training dataset: As newer versions are developed, they usually benefit from an increased amount of data used in the training process. This enables me to have a broader understanding of various topics and generate more accurate and contextually relevant responses.
- Improved language understanding: With advancements in the underlying architecture, I can better understand and process complex sentences, nuances, and contextual information. This allows me to generate more coherent and contextually accurate responses.
- Enhanced conversational skills: As the model evolves, my ability to maintain longer and more meaningful conversations improves. This includes better context retention and the capacity to address more complex queries effectively.
- Reduced bias and harmful outputs: The development process aims to minimize the biases and harmful outputs that might arise from the model. This involves refining the training data and improving the fine-tuning process to create a more responsible and safe AI.
Please note that while I am an improvement over previous models, I am still not perfect. There may be instances where I generate incorrect or nonsensical answers. If you encounter such issues, please feel free to correct me or ask for clarification.
I’m exploring it.
Meanwhile, a bunch of educational projects using the ChatGPT API have rolled out. Khan Academy announced Khanmigo, a putative tutor:
At almost the same time Grammarly announced GrammarlyGo, a ChatGPT-powered tool to help you write. It’s somewhere between a tutor and coauthor:
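Under the hood, tools like these are essentially wrappers around chat-completion calls: a system prompt sets the persona, and each student turn is appended to the conversation. Here is a minimal sketch in Python; the prompt wording, function names, and model choice are my own illustrative assumptions, not how Khanmigo or GrammarlyGo actually work:

```python
import os


def build_tutor_messages(question: str) -> list[dict]:
    """Assemble the chat history sent to the model: a system prompt
    establishing the tutoring persona, then the student's question.
    (The persona text here is a hypothetical example.)"""
    return [
        {"role": "system",
         "content": "You are a patient tutor. Guide the student with "
                    "hints and questions instead of giving answers outright."},
        {"role": "user", "content": question},
    ]


def ask_tutor(question: str) -> str:
    """Send one turn to OpenAI's chat-completions endpoint and return
    the model's reply. Requires the `openai` package and an API key."""
    import openai  # pip install openai

    openai.api_key = os.environ["OPENAI_API_KEY"]
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # the model behind ChatGPT at launch
        messages=build_tutor_messages(question),
    )
    return response.choices[0].message.content
```

A real tutor would keep appending each exchange to the `messages` list so the model retains conversational context, which is also where the per-token costs of these tools come from.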
What other educational applications have you seen which use ChatGPT’s API?
I’ve been using Bing Chat for a while and have found it very useful, but these large language models do raise more and more ethical questions. I recommend this article if you’ve not read it yet: https://nymag.com/intelligencer/article/ai-artificial-intelligence-chatbots-emily-m-bender.html
Yes, the parrot model helps frame things.
ChatGPT is like wearing seven-league boots: just a few strides and you are propelled light-years ahead.
It is a personal tutor that answers with infinite patience. Certainly the potential for massive leaps in learning is there. But it is also decoupled from educational institutions, that is, the social institutions and credentialing processes we call schools, colleges, and universities, which give learning its meaning.
And what about the motivation to complete exercises, rather than just learning on your own? Without it, without a meaningful context, learning may not happen, even though it theoretically could. That has always been a problem with schooling: balancing the two sides of the equation.
Will students exposed to AI ever EVER be able to decide on a major? I think not. The stock of knowledge is simply too vast to fit into the artificial containers we now call “disciplines.” The whole idea behind departments is fraying at the edges. With AI, all knowledge is strangely homogenized, leaving it to the user or corporation to “brand” streams of knowledge and learning in their own way.
But it obliterates earlier distinctions that were culturally constructed centuries ago.
“Education” was never the same thing as learning.
Education is oriented toward social institutions modeled on factory workspaces now called classrooms. With AI, all these pretensions are fading even more quickly.
I see ChatGPT and even Khan Academy as fragmenting further an already shaky education sector, decoupling its various components as they drift haplessly away in space like the expanding universe.
Even worse: for skills assessment to be legitimate, the process relies on the veracity of assessment results, which ChatGPT now calls into question. ‘Old school’ becomes even more impossible.
This is where the advent of powerful AI tools threatens to undermine the delicate balancing act in the classroom: AI is disruptive, problematizing the creation of a learning environment that promotes intrinsic motivation rather than being driven by external factors such as grades or rewards.
Rather than giving students more autonomy and control over their learning, allowing them to pursue their interests and passions, and providing opportunities for self-directed learning, AI threatens to send them drifting off into space, far above school-structures, assessment, subject headings.
Excellent thoughts, Glen.
As with MOOCs and random (informal) internet content, there isn’t a serious structure to keep the non-passionate going for a sustained time.