Blue Origin investigated by American aviation watchdog

The American aviation watchdog is reviewing safety concerns raised about Blue Origin

The Federal Aviation Administration says it will review safety concerns raised by former Blue Origin employees about billionaire Jeff Bezos’ space company.

The letter, written by former employees, states that the company prioritised speed over safety on some of its rockets.

The employees who signed the letter claim they had “seen a pattern of decision-making that often prioritizes execution speed and cost reduction over the appropriate resourcing to ensure quality.”

In a statement, the Federal Aviation Administration said it was looking into safety allegations made in the letter and that it takes all such claims seriously. The aviation-safety agency regulates space launches and re-entries of space vehicles, as well as the operation of commercial launch sites.

Blue Origin said in a statement it stands by its safety record.

AI is now part of our world. Uni graduates should know how to use it responsibly

Artificial intelligence is rapidly becoming an everyday part of our lives. Many of us use it without even realising, whether we are writing emails, finding a new TV show or managing smart devices in our homes.

Rachel Fitzgerald, The University of Queensland and Caitlin Curtis, The University of Queensland

It is also increasingly used in many professional contexts – from helping with recruitment to supporting health diagnoses and monitoring students’ progress in school.

But apart from a handful of computing-focused and other STEM programs, most Australian university students do not receive formal tuition in how to use AI critically, ethically or responsibly.

Here’s why this is a problem and what we can do instead.

AI use in unis so far

A growing number of Australian universities now allow students to use AI in certain assessments, provided the use is appropriately acknowledged.

But this does not teach students how these tools work or what responsible use involves.

Using AI is not as simple as typing questions into a chat function. There are widely recognised ethical issues around its use including bias and misinformation. Understanding these is essential for students to use AI responsibly in their working lives.

So all students should graduate with a basic understanding of AI, its limitations, the role of human judgement and what responsible use looks like in their particular field.

We need students to be aware of bias in AI systems. This includes how their own biases could shape how they use the AI (the questions they ask and how they interpret its output), alongside an understanding of the broader ethical implications of AI use.

For example, do the data and the AI tool protect people’s privacy? Has the AI made a mistake? And if so, whose responsibility is that?

What about AI ethics?

The technical side of AI is covered in many STEM degrees. These degrees, along with philosophy and psychology disciplines, may also examine ethical questions around AI. But these issues are not a part of mainstream university education.

This is a concern. When future lawyers use predictive AI to draft contracts, or business graduates use AI for hiring or marketing, they will need skills in ethical reasoning.

Ethical issues in these scenarios could include unfair bias, like AI recommending candidates based on gender or race. It could include issues relating to a lack of transparency, such as not knowing how an AI system made a legal decision. Students need to be able to spot and question these risks before they cause harm.

In healthcare, AI tools are already supporting diagnosis, patient triage and treatment decisions.

As AI becomes increasingly embedded in professional life, the cost of uncritical use also scales up, from biased outcomes to real-world harm.

For example, if a teacher relies on AI carelessly to draft a lesson plan, students might learn a version of history that is biased or just plain wrong. A lawyer who over-relies on AI could submit a flawed court document, putting their client’s case at risk.

How can we do this?

There are international examples we can follow. The University of Texas at Austin and the University of Edinburgh both offer programs in ethics and AI. However, both are currently targeted at graduate students. The University of Texas program focuses on teaching STEM students about AI ethics, whereas the University of Edinburgh’s program has a broader, interdisciplinary focus.

Implementing AI ethics in Australian universities will require thoughtful curriculum reform. That means building interdisciplinary teaching teams that combine expertise from technology, law, ethics and the social sciences. It also means thinking seriously about how we engage students with this content through core modules, graduate capabilities or even mandatory training.

It will also require investment in academic staff development and new teaching resources that make these concepts accessible and relevant to different disciplines.

Government support is essential. Targeted grants, clear national policy direction, and nationally shared teaching resources could accelerate the shift. Policymakers could consider positioning universities as “ethical AI hubs”. This aligns with the government-commissioned 2024 Australian Universities Accord report, which called for building capacity to meet the demands of the digital era.

Today’s students are tomorrow’s decision-makers. If they don’t understand the risks of AI and its potential for error, bias or threats to privacy, we will all bear the consequences. Universities have a public responsibility to ensure graduates know how to use AI responsibly and understand why their choices matter.

Rachel Fitzgerald, Associate Professor and Deputy Associate Dean (Academic), Faculty of Business, Economics and Law, The University of Queensland and Caitlin Curtis, Research Fellow, Centre for Policy Futures, The University of Queensland

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Netflix boosts revenue forecasts after strong Q2 performance

Netflix boosts revenue and margin forecasts, driven by membership growth, price hikes, and ad business success

In Short:
– Netflix expects revenue of $44.8 billion to $45.2 billion this year after a strong second quarter.
– Popular shows and sustained subscriber growth have significantly enhanced Netflix’s market position and operating margin.

Netflix has raised its revenue and operating margin forecasts after a strong second quarter. The streaming service reported revenue up 16% to $11.08 billion and net profit up 46% to $3.1 billion, slightly surpassing its own expectations. Subscriber growth, price increases, and progress in its advertising business all contributed to this result.

The company said popular titles like “Squid Game” and “Ginny & Georgia” strengthened its position as a leading global streamer, even as traditional cable networks struggled. Netflix’s stock has nearly doubled over the past year, despite a recent 1.3% decline in after-hours trading.

The return of popular shows, including “Wednesday” and the final season of “Stranger Things”, is expected to further boost viewership.

Revenue Growth

Netflix now anticipates generating between $44.8 billion and $45.2 billion in revenue for the year, up from earlier projections.

The company’s operating margin rose to 34.1%, exceeding previous forecasts. Investment in programming, including international content, continues as the company aims to enhance subscriber retention. Shows like “Adolescence” and movies such as “Back in Action” have also drawn significant viewer engagement during the first half of the year.

Privacy group targets AliExpress, TikTok, WeChat for violations

Privacy group noyb files complaints against AliExpress, TikTok and WeChat for EU data law violations

In Short:
– Austrian group noyb has filed data privacy complaints against AliExpress, TikTok, and WeChat for EU law violations.
– Noyb aims to enforce users’ data access rights; it has previously targeted American firms, leading to investigations and fines.

Austrian advocacy group noyb has lodged data privacy complaints against China’s AliExpress, TikTok, and WeChat. The group alleges these companies are not complying with European Union laws that require them to give users full access to their data.

Although most tech firms offer tools that let users download their information, noyb claims several Chinese companies make this data difficult to access.

Kleanthi Sardeli, a data protection lawyer at noyb, stated that these companies collect extensive user data while refusing to grant full access as mandated by EU law.

Prior Complaints

Noyb has previously targeted American companies like Apple, Alphabet, and Meta, resulting in multiple investigations and substantial fines.

In January, noyb filed complaints against six Chinese firms, advocating for a suspension of data transfers to China and penalties that could amount to 4% of a company’s global revenue.
