True or False: Under the current reforms, gig workers are automatically entitled to paid annual leave in Australia.
FALSE.
Were Australia’s 2024 AI Safety Standards legally binding or voluntary?
Voluntary / non-binding.
Under the EEV framework, which principle is most challenged when workers are not consulted before AI systems are introduced into workplaces?
Voice.
The “triangular relationship” in gig work makes regulation difficult because responsibility is split between which three parties?
The worker, the platform/intermediary and the customer/end-user.
According to Wright (2025), why might the EU AI Act still provide limited protection for workers despite being legally enforceable?
Because the EU AI Act focuses more on regulating the technology itself than protecting workers’ rights in workplaces.
The 2024 Closing Loopholes reforms introduced stronger protections for “employee-like” gig workers.
Which principle of the EEV framework does this reform primarily attempt to strengthen, and why?
Equity, because the reforms aim to improve protections and fairness for vulnerable gig workers.
A food delivery platform cuts driver pay rates by 15% overnight after introducing a new AI pricing model. Workers receive an automated email informing them of the change.
Under current Australian law and recent reforms, is this likely to be lawful?
Probably yes, at least partially. Most gig workers are still classified as independent contractors rather than employees, meaning platforms often retain broad power to alter pay structures. Recent Closing Loopholes reforms may allow the Fair Work Commission to establish minimum standards for some “employee-like” workers, but these protections are still developing and may not fully prevent sudden pay reductions.
This demonstrates that recent reforms are a significant step forward but may still be insufficient to address income insecurity and the bargaining-power imbalance in platform work.
A company introduces AI software that tracks employees’ keystrokes, webcam activity and time away from their desks to generate “productivity scores”.
Under current Australian law and recent reforms, is this likely to be lawful?
Possibly. Australia currently lacks comprehensive workplace AI legislation specifically regulating algorithmic surveillance and automated management. Existing protections come from fragmented areas such as privacy law, work health and safety obligations, anti-discrimination law, and general employment protections.
While some monitoring may be lawful, many critics argue current regulation is inadequate because workers often lack transparency rights and meaningful control over invasive AI systems.
*BONUS POINTS* A rideshare platform uses surge pricing algorithms that dramatically increase fares during emergencies and natural disasters.
Even if lawful, is this system fair or fundamentally exploitative?
For every pillar of the EEV framework, there are 100 extra points! I.e. for this question there are 600 points available!
Supporters argue surge pricing encourages more drivers onto the road during periods of high demand, improving service availability. Critics argue it exploits vulnerable consumers and prioritises profit during emergencies.
Framework application
Efficiency: allocates labour quickly during high demand
Equity: affordability and fairness concerns
Voice: consumers and workers have limited influence over pricing systems
A food delivery rider is permanently ‘deactivated’ from a platform after receiving several low customer ratings. The worker cannot speak to a human manager, appeal the decision properly, or understand how the algorithm judged their performance.
Is this an example of efficient business management or unfair digital control? At what point should algorithms be regulated like human managers?
This is unfair digital control rather than simple business efficiency, because the platform still exercises significant power over workers while avoiding many of the responsibilities of a traditional employer. Although algorithms can improve efficiency and customer service, workers often have little transparency or ability to challenge decisions that affect their income and job security. This creates a major power imbalance: gig economy platforms still function much like employers, so once an algorithm makes consequential decisions about pay, work allocation or deactivation, it should arguably be held accountable for fair treatment and worker protections in the same way as a human manager.
In 2024, Australia introduced voluntary AI Safety Standards rather than legally enforceable rules for workplace AI systems. Meanwhile, the EU introduced mandatory obligations and fines for high-risk AI systems.
Which approach do you think is more effective for protecting workers, and why?
The EU approach is likely to be more effective because its AI regulations are legally enforceable and include mandatory obligations and penalties for high-risk AI systems, whereas Australia’s current approach relies largely on voluntary guidelines with limited enforcement mechanisms. However, stricter regulation may also increase compliance costs and potentially slow innovation.
*BONUS POINTS* A company introduces AI software that predicts which employees are most likely to resign or become “low performers” and quietly reduces their shifts and promotion opportunities.
Even if lawful, is this system fair or fundamentally exploitative?
For every pillar of the EEV framework, there are 100 extra points! I.e. for this question there are 700 points available!
Many critics would view this as highly exploitative because workers are penalised through opaque predictions they cannot understand or challenge. Predictive AI systems may reinforce bias, create anxiety and undermine procedural fairness. Employers may argue predictive analytics improve workforce planning and efficiency, but the scenario demonstrates why many believe current reforms are insufficient to regulate increasingly intrusive forms of algorithmic management.
Framework application
Efficiency: workforce planning and predictive management
Equity: bias, fairness and transparency concerns
Voice: workers excluded from decision-making and unable to challenge outcomes