
Why universities have been waiting for their ChatGPT moment


The COVID-19 pandemic was a shock to higher education systems everywhere. But while some changes, like moving lectures online, were relatively easy to make, assessment posed a much bigger challenge. Assessment can take many forms: essays, exams, experiments and more.

Many institutions and individual academics essentially outsourced the assessment process to software. They increased their use of programs like Turnitin to check for matched wording in students’ assignments. And for closed-book, timed tests they used tools such as Proctorio, which monitor a student’s computer or phone while they write exams.

But universities did not seize this chance to reflect on what higher education is for and how assessment might be used to support that purpose. Instead they doubled down on the status quo, breathing a sigh of relief once isolation and lockdown orders were lifted and things could return to “normal”.

The advent of ChatGPT and similar chatbots provides another opportunity for the sector to reflect on why and how it assesses – and what higher education is for.

ChatGPT is a chatbot, powered by artificial intelligence (AI), that enables users to have natural, human-like conversations with a computer. It uses advanced language processing techniques to interpret a user’s input and provide natural, contextual responses, so that conversing with it can feel like talking to a real person. The system is trained on a vast body of text drawn from the internet and uses this to generate a unique response to each prompt.

So, for instance, it can write an essay on any topic – “the advantages of breastfeeding” or “the social complexity of the refugee crisis in Europe”. It can also be trained to provide context-specific essays.

We are academics from South Africa, Australia, the UK and the US, working in fields related to education, ways of learning and teaching, and academic practice. We believe ChatGPT could be a powerful impetus to shift from understanding assessment as the assurance of an educational “product” to assessment as learning.

Used properly, it could be a valuable way to teach students about critical thinking, writing and the broader role of artificial intelligence tools like chatbots in the world today.

The advent of ChatGPT has prompted a variety of reactions from universities all over the world. In the UK, for instance, the reaction towards ChatGPT and higher education has veered from the hyperbolic – will AI ruin universities? – to the more measured, such as considering what students think of the technology.

If the purpose of higher education is that students memorise and summarise a body of knowledge, and that this is then certified via assessment, then ChatGPT is an existential threat. The market value of credentials is directly threatened if universities can no longer confidently assert that the texts assessed by academics have indeed been produced by their students.

But if the purpose of higher education is to nurture a transformative relationship to a particular body of knowledge that enables students to see the world – and their place in it – in new ways, then assessment takes on a vastly different meaning.

Used well, ChatGPT and similar tools can show students the wonders and responsibilities of acquiring and building powerful knowledge. They can assist students’ learning rather than stand in opposition to it.

Here are four ways this might happen.

1. Students can reflect on articles produced by ChatGPT that contain fabricated references and distorted information, and then deliberate on the potential consequences of this in an era of fake news.

2. Students can be set assignments that require them to compare ChatGPT’s answers with answers they have developed themselves, to ascertain whether they know the material and how it might be represented differently.

3. ChatGPT can be used to support essay writing and to help foster a sense of mastery and autonomy. Students can analyse its responses to note how the software has drawn from multiple sources and to identify flaws in those responses that would need their attention.

4. Students can be encouraged to consider the extent to which their use of ChatGPT has enabled or constrained their access to powerful knowledge. This is a chance to critically reflect on where and how the use of AI is taking place in society and their potential future professions.

There is already a multitude of ideas available online about how ChatGPT can be used to create prompts for assignments. Lecturers and students can explore these to see how they might be adapted for their own learning and teaching needs.

None of these ideas will be simple to implement. Academics will need support from their institutions in considering what such technological developments mean for their disciplines. And, we’d argue, that support must help academics to move beyond seeking ways to trick the software or to monitor students.

Society and the higher education sector squandered the opportunity that COVID presented to reflect on what higher education was for and how assessment might be used to enhance learning.

Rather than signalling the end of higher education, ChatGPT has instead presented the sector, and society more broadly, with another opportunity: a chance to develop innovative and inclusive teaching, learning and assessment aligned to a clearer understanding of what higher education is for.


Elon Musk and experts call for six-month pause on A.I.



Elon Musk and a group of leading A.I. experts are calling for a six-month pause on developing systems more powerful than OpenAI’s latest model, GPT-4.

The Future of Life Institute fears the technology could pose risks to society.

In an open letter signed by some of the biggest and most influential minds in tech, the Institute calls for the pause so that frameworks can be constructed to better handle A.I.

“Powerful A.I. systems should be developed only once we are confident that their effects will be positive and their risks will be manageable,” the open letter said.

British computer scientist Stuart Russell is a signatory to the open letter, and he has explained what is happening in the field that worries him.

“What is gestating in computer and research labs is general purpose A.I.,” Russell declared recently. “A.I. that can do anything that the human mind can be turned to.

“Because of the enormous advantages machines have over humans, I expect general purpose A.I. will far exceed human capabilities in almost every dimension.”


Alibaba shares soar as company breaks into parts


Alibaba shares have soared as company executives announce a business shake-up

 
It’s been a good day for investors in Chinese tech giant Alibaba.

Shares in the company soared as executives announced a plan to break the business into parts.

The company’s chief executive says he will split the $220 billion empire into six individual units.

The major restructuring is the biggest in the company’s 24-year history.

Alibaba shares gained more than 14 per cent in New York and were up 13 per cent in Hong Kong.

The move follows reports Alibaba founder Jack Ma resurfaced in China this week after a long absence.

The units will have their own chief executives and boards of directors.

They will be allowed to raise capital and seek stock market listings.

Alibaba says the units will “capture opportunities in their respective markets and industries, thereby unlocking the value of Alibaba Group’s respective businesses”.

“The market is the best litmus test, and each business group and company can pursue independent fundraising and IPOs when they are ready,” says chief executive Daniel Zhang.


Facial recognition has been used a million times by U.S. police


Controversial facial recognition has been used a million times by police to help track criminals

As facial recognition becomes more prominent, the founder of tech firm Clearview says his company has run nearly a million searches for U.S. police.

It’s also been revealed the company has scraped 30 billion images from platforms such as Facebook and Instagram without users’ permission.

The company has been fined numerous times in Europe and countries like Australia for breaches of privacy laws.

In the U.S., critics say the use of Clearview by authorities puts everyone into a “police line-up”.

The company’s high-tech system allows law enforcement to upload a photo of a face and find matches in a database of billions of images it has collected.

It then provides links to where matching images appear online.

The tool is considered to be one of the world’s most powerful and accurate.

While the company is banned from selling its services to most U.S. companies, there is an exemption for police.
