AI and the Australian Workplace: Disruption, Opportunity, and the Shifting Future of Jobs

Artificial Intelligence (AI) is no longer merely a theoretical concept or the domain of research laboratories. It has become integral to Australian workplaces, reshaping industries and changing the relationship between workers, employers, and the broader economy. From large-scale manufacturing and logistics projects to modest applications in small retail shops, AI is redefining how tasks are completed, decisions are made, and value is created. In so doing, AI has sparked both enthusiasm and trepidation among the Australian public. On the one hand, many see it as a catalyst for greater productivity, cost efficiency, and creative innovation. On the other, concerns about job security, fairness, and ethical use abound. This report examines how Australians are reacting to AI, which jobs are at greatest risk, the sentiments underlying the public conversation, and the critical interventions recommended by the Australian Senate. Drawing on fifteen sources, including government reports, industry insights, and leading academic research, it provides a robust overview of AI’s current and future impact on Australia’s workforce.

1. A Rapidly Changing Technological Landscape

Australia’s technological ecosystem has evolved rapidly over the past decade. According to the Tech Council of Australia, nearly two-thirds of industry leaders singled out AI as the most significant tech trend shaping the economy heading into 2025 (Tech Council of Australia, 2025, Reference 1). This forecast resonates across multiple sectors, including advanced manufacturing, healthcare, retail, and financial services. AI’s capacity to transform data-driven tasks, automate repetitive processes, and facilitate real-time analytics has prompted businesses of every size to explore how they can harness these emerging tools.

IT Brief Australia has noted that agentic automation, automation that can learn and adapt independently, stands poised to restructure even highly specialised industries, from pharmaceuticals to precision engineering (IT Brief Australia, 2025, Reference 2). With agentic automation, machines do not merely follow pre-set rules; they analyse new data, learn from complex patterns, and autonomously optimise operations. Such developments promise efficiency gains but also heighten uncertainty for workers unable to upskill rapidly enough to remain relevant in an AI-centric environment. Compounding this swift technological expansion is a surge in overall AI adoption: Google Australia reports that daily AI usage has grown substantially, with a quarter of workers now employing AI tools regularly, up sharply from only a few years ago (Google Australia, 2025, Reference 3).

This heightened adoption rate underscores the idea that AI is no longer the exclusive realm of multinational corporations. Even small and medium-sized enterprises now use AI-driven customer service chatbots, automated marketing platforms, and inventory optimisation solutions. Enthusiasm for these tools often revolves around their potential to lower overhead expenses, facilitate round-the-clock operations, and extract actionable insights from vast datasets. Yet the fact that AI systems can learn and make autonomous decisions—sometimes without direct human oversight—raises concerns about accountability, bias, privacy, and job displacement. The broader public discourse reflects this tension between excitement about efficiency and wariness about workforce upheavals.

2. Australia’s Diverging Reactions to AI

Despite national optimism, not all Australians share the same view of AI’s transformative power. Public sentiment is split, with one contingent viewing AI as an indispensable engine of progress and another worried about potential disruptions to the labour market. Numerous think tanks, unions, and community groups emphasise that new technologies affect different demographics differently. For instance, while technology-savvy professionals might find AI an exciting gateway to new career paths and better salaries, those in roles characterised by repetitive tasks face a more uncertain future.

2.1 Optimism and Embrace of Innovation

A sizable portion of the workforce lauds AI for its capacity to handle mundane or time-consuming tasks, freeing up human employees for more strategic or creative responsibilities. Data analytics, human resources, and software engineering professionals recognise AI’s potential to reduce human error and streamline complex workflows. The Australian Human Rights Commission has discussed how AI can enhance service delivery in areas such as disability support—improving inclusivity, accessibility, and targeted interventions (Australian Human Rights Commission, 2025, Reference 10). Moreover, AI-driven innovations in healthcare can accelerate diagnoses and produce better patient outcomes, especially in resource-constrained regional and rural healthcare facilities.

Similarly, certain small businesses have discovered that AI helps them remain competitive against larger firms. Automated customer relationship management (CRM) systems, AI-based content creation platforms, and predictive analytics for customer preferences allow these ventures to broaden their reach without significantly increasing labour costs. Small retailers and e-commerce entrepreneurs can reduce waste and improve profitability by optimising inventory and forecasting demand more accurately. This positive sentiment hinges upon the premise that AI, if integrated responsibly, amplifies human creativity, augmenting—rather than replacing—human workers.
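The forecasting logic behind such tools can be sketched in a few lines. The example below is a deliberately simple illustration rather than any vendor's actual method: a trailing moving average plus a safety buffer, with invented weekly sales figures.

```python
# Toy demand forecast for a small retailer: the mean of recent weeks'
# sales, padded by a safety factor, suggests a reorder quantity.
# Commercial AI tools use far richer models; figures here are invented.

def forecast_demand(weekly_sales, window=4):
    """Forecast next week's demand as the mean of the last `window` weeks."""
    recent = weekly_sales[-window:]
    return sum(recent) / len(recent)

def reorder_quantity(weekly_sales, on_hand, safety_factor=1.2):
    """Suggest an order covering forecast demand plus a safety buffer."""
    target = forecast_demand(weekly_sales) * safety_factor
    return max(0, round(target - on_hand))

sales = [120, 135, 128, 140, 150, 145]   # units sold per week
print(forecast_demand(sales))             # mean of the last 4 weeks
print(reorder_quantity(sales, on_hand=60))
```

Even this naive version captures the core benefit described above: the forecast adapts automatically as new sales data arrives, with no manual re-planning.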

2.2 Anxiety and Scepticism

Simultaneously, there is a palpable sense of anxiety among workers who fear AI could render many jobs obsolete. Older employees and those in roles requiring less specialised expertise are particularly susceptible to displacement. These apprehensions are voiced powerfully by organisations like the Australian Council of Trade Unions (ACTU), which warns that up to one-third of the workforce is at risk of automation within the next decade (Australian Council of Trade Unions, 2025, Reference 4). Beyond manual labour and low-skilled trades, even higher-paying white-collar roles are threatened by advanced AI-driven analysis tools capable of undertaking document review, compliance checks, and sophisticated financial modelling at a fraction of the cost and time of human employees.

Moreover, sceptics point out that government policies have not kept pace with AI developments, leaving a regulatory vacuum. Workers confronted by job displacement may lack support for retraining, and oversight of AI-driven systems is often minimal. Critics argue that significant decisions about adopting specific AI platforms are typically made in corporate boardrooms with little input from employee representatives, thus exacerbating power imbalances and further undermining trust. Without adequate frameworks, AI-enabled surveillance, performance metrics, and algorithmic decision-making could also undermine worker autonomy, subjecting workers to opaque processes they cannot understand or contest.

3. Jobs at Greatest Risk of Displacement

As AI applications proliferate, questions arise about which occupations stand on the precipice of becoming obsolete. While some tasks—particularly those involving creativity, emotional intelligence, and nuanced decision-making—are harder for AI to replicate, many existing roles remain vulnerable to automation. A study by Technology Decisions predicted that up to 1.3 million Australian jobs could vanish by 2027 if companies fully embrace AI for routine tasks (Technology Decisions, 2025, Reference 7). These forecasts suggest that specific roles will face severe disruption:

1. Administrative and Clerical Positions: For decades, these roles have centred on manual data entry and document management. AI-enabled optical character recognition (OCR) and natural language processing (NLP) systems can now manage these tasks more efficiently and with fewer errors. Over time, companies can reduce administrative overhead by employing advanced software tools, displacing many traditional clerical roles.

2. Customer Service Representatives: AI chatbots and automated phone systems already manage large volumes of customer inquiries in real time, often with minimal human oversight. Such automation is not restricted to large corporations—smaller enterprises also deploy subscription-based AI customer service solutions, thus requiring fewer full-time representatives.

3. Manufacturing and Logistics Operatives: The shift toward AI-enhanced robotics disrupts assembly line work and warehouse operations. Robots equipped with machine vision and real-time analytics can handle sorting, packaging, and distribution tasks, leaving human workers in more specialised positions, focusing on oversight, equipment maintenance, or complex troubleshooting.

4. Financial and Accounting Clerks: Functions such as invoice processing, expense auditing, and data reconciliation are increasingly handled by AI platforms that can detect anomalies and generate predictive reports automatically. Hence, accountants and clerks performing routine analyses may be replaced or forced to pivot into advisory roles emphasising strategic planning.

5. Routine Research and Analysis: In sectors like legal services or media, AI-driven software can handle basic contract review, standard legal research, and even rudimentary content generation. Human expertise remains essential for high-level interpretation and nuanced insight, but early-stage research tasks are now more prone to automation.
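The anomaly detection mentioned in item 4 can be made concrete with a toy statistical sketch: flag invoices whose amounts deviate sharply from the typical value, here using a robust median-based test. The figures are invented, and production platforms use learned models rather than a single statistic.

```python
# Flag invoice amounts that sit far from the median, measured in units
# of the median absolute deviation (MAD), a robust estimate of spread
# that a single large outlier cannot distort. Illustrative only.
from statistics import median

def flag_anomalies(amounts, threshold=5.0):
    """Return indices of amounts more than `threshold` MADs from the median."""
    med = median(amounts)
    mad = median(abs(a - med) for a in amounts)
    if mad == 0:          # all values identical: nothing to flag
        return []
    return [i for i, a in enumerate(amounts) if abs(a - med) / mad > threshold]

invoices = [210.0, 195.5, 205.0, 198.0, 4999.0, 202.5]
print(flag_anomalies(invoices))  # index 4: the 4999.0 invoice stands out
```

A median-based test is used deliberately: with a small batch, an extreme invoice inflates the ordinary mean and standard deviation enough to hide itself, whereas the median and MAD stay anchored to the typical amounts.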

While these disruptions can be alarming, economists and analysts underscore that some “old” roles will evolve rather than vanish. In many scenarios, employees transition from purely mechanical tasks into responsibilities requiring oversight, interpretation, and complex problem-solving. Nevertheless, such transitions often demand significant retraining—a challenge that disproportionately affects workers who cannot readily acquire new technical competencies due to financial, geographic, or personal constraints. These issues risk amplifying social inequality and stranding entire workforce segments if unaddressed.

4. Skills Gap and Workplace Preparedness

A recurrent theme in discussions about AI adoption is the skills gap. Randstad Australia found that while a rapidly growing number of businesses rely on AI-driven solutions, only a minority of employees report receiving formal training on deploying, managing, or interacting safely with these technologies (Randstad Australia, 2025, Reference 5). This discrepancy fosters a sense of job insecurity. Employees in roles susceptible to automation may worry about being replaced and lacking pathways to shift into the newly created AI-centric roles requiring data science, coding, or algorithmic thinking.

Indeed, the skills gap cuts both ways. Employers lament that there are insufficient professionals experienced in data engineering, machine learning modelling, cloud architecture, and cybersecurity. Large technology firms can fill the gap by recruiting internationally or offering premium salaries to in-demand experts. Still, smaller companies, non-profit organisations, and regional institutions struggle to attract or afford these specialists. The imbalance forces a reevaluation of how national education and training systems prepare future workers.

Policymakers, corporate leaders, and educational institutions are gradually recognising the need to modernise curricula in high schools and universities. Incorporating AI fundamentals—programming basics, data ethics, algorithmic bias—into secondary education could help ensure that younger Australians enter the workforce with a foundational comprehension of AI’s possibilities and pitfalls. Those already in the workforce also require short-term certificate courses or subsidised training initiatives to remain relevant in a rapidly shifting labour market. While organisations like TAFE (Technical and Further Education) offer vocational courses in digital skills, many experts believe that large-scale, government-backed investment in AI-specific programs is crucial if Australia aims to protect workers from abrupt obsolescence.

5. AI’s Broader Economic Prospects

Beyond individual roles, AI has expansive implications for the Australian economy. The Australian Parliament’s extensive report on “The Future of Work: AI’s Impact on Industry, Business, and Workers” underscores that AI-driven productivity gains can lead to overall economic growth, potential wage improvements for specialised roles, and a more globally competitive Australia (Australian Parliament, 2025, Reference 13). For instance, manufacturing plants incorporating AI-enabled predictive maintenance can reduce downtime and wasted materials, potentially making Australian goods more competitive internationally. In logistics and supply chain management, AI-based optimisation has the potential to slash shipping times and costs, thus expanding export possibilities.

Yet, the International Monetary Fund cautions that such economic benefits must be weighed against the risk of exacerbating income inequality (IMF, 2025, Reference 14). Historically, technological revolutions have often favoured those with capital or advanced skills. If AI predominantly benefits highly skilled workers, top-tier firms, or major cities, regional communities and lower-skilled workers may be left behind. Even though the economy might expand, marginalised populations could face heightened job insecurity, wages that fail to keep pace with living costs, and diminished career prospects. Policies encouraging inclusive growth and equitable access to AI resources—both for urban and rural Australians—become vital in ensuring that the benefits of AI are distributed fairly.

The World Economic Forum also addresses these challenges, noting that countries slow to adapt risk a declining competitive standing in the global market (World Economic Forum, 2025, Reference 15). For Australia, which prides itself on high labour standards and progressive social protections, harnessing AI for economic growth must be coupled with robust social support systems. Such protections could include wage insurance programs, portable benefits for gig economy workers, or universal access to retraining for those displaced by automation.

6. The Senate’s 13 Key Recommendations for AI Regulation and Support

Against this dynamic backdrop, the Australian Senate established a Select Committee on Artificial Intelligence (AI) to chart a cohesive path. The committee interviewed industry specialists, labor representatives, academics, and international experts, culminating in a report containing thirteen critical recommendations to ensure responsible, equitable AI adoption across Australia. These recommendations tackle everything from legislative frameworks to education reform, shedding light on how to manage both the opportunities and threats of AI.

1. National Legislation for High-Impact AI Systems

The committee’s first recommendation advocates for legislation requiring organisations that deploy AI in critical sectors—such as healthcare, finance, and infrastructure—to adhere to strict guidelines around risk assessments, transparency, and oversight. This legislation would stipulate that AI be subject to regulatory review, particularly in areas where automated decisions could affect human welfare or civil liberties.

2. Classify General-Purpose AI as High Risk

Not all AI systems carry the same risk profile. The second recommendation calls for classifying general-purpose AI models, including generative AI, as “high risk” if they can fundamentally alter human livelihoods or shape sensitive decisions. These high-risk systems would be subject to stringent audits, disclosure protocols, and possibly licensing, ensuring they operate ethically and remain accountable to the public.

3. Bolster Australian AI Research Capacity

The third recommendation addresses the need for dedicated AI research hubs nationwide and enhanced government funding for university research. In line with this recommendation, policymakers would target specialised research in areas like explainable AI, bias detection, Indigenous-led AI innovation, and environmentally sustainable machine learning algorithms. By fostering local research, Australia can reduce its reliance on foreign tech giants and cultivate domestic expertise.

4. Integration of AI Literacy into School Curricula

The fourth recommendation focuses on future generations. Incorporating AI education into primary and secondary curricula—from basic coding principles to discussions on ethics and data privacy—would prepare students for a world where AI is ubiquitous. Such education could help narrow the digital divide and broaden the pipeline of AI-literate graduates entering universities and technical programs.

5. National Reskilling and Upskilling Initiatives

The fifth recommendation calls for broad-based adult education programs specifically around AI skills. As many as one-third of Australian workers could see their roles changed or eliminated by automation. The committee encourages cost-sharing schemes between government and industry to create accessible retraining programs in data analytics, software development, AI ethics, and advanced manufacturing technologies.

6. Ethical AI Guidelines and Mandatory Bias Audits

The sixth recommendation centres on establishing a uniform code of AI ethics aligned with Australian values. Companies deploying AI for hiring, loan approvals, or other life-altering processes must undergo annual bias audits. These audits involve third-party experts examining algorithmic outputs to detect gender, racial, or age discrimination. Adverse findings would mandate corrective action, with financial penalties for non-compliance.
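What such an audit computes can be illustrated concretely. One widely used yardstick, borrowed from employment-testing practice rather than anything the Senate report prescribes, is the "four-fifths rule": a group whose selection rate falls below 80% of the best-off group's rate warrants scrutiny. The group names and figures below are invented.

```python
# A minimal adverse-impact check over hiring outcomes, keyed by group.
# This is an illustrative sketch; actual audit criteria and thresholds
# would be set by the regulator, not by this rule of thumb.

def selection_rates(outcomes):
    """outcomes: {group: (selected, total)} -> {group: selection rate}"""
    return {g: sel / tot for g, (sel, tot) in outcomes.items()}

def adverse_impact(outcomes, threshold=0.8):
    """Return groups whose rate falls below threshold * the highest rate."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return sorted(g for g, r in rates.items() if r < threshold * best)

hiring = {"group_a": (45, 100), "group_b": (28, 100), "group_c": (40, 100)}
print(adverse_impact(hiring))  # 0.28 < 0.8 * 0.45, so group_b is flagged
```

An auditor would treat a flagged group as a starting point for investigation, not proof of discrimination: the statistic identifies disparity, and the follow-up examines whether the model's inputs explain or encode it.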

7. Intellectual Property Rights for AI Training Data

Recognising that AI often relies on large corpora of online text, images, or music, the seventh recommendation ensures that content creators retain control over how their work is used in training AI models. New frameworks would require AI developers to license or otherwise fairly compensate creators whose works undergird the AI’s knowledge base. This measure would primarily protect Australian musicians, writers, and visual artists from losing royalties.

8. Data Protection Impact Assessments (DPIAs)

The eighth recommendation introduces mandatory DPIAs for organisations deploying AI solutions that handle sensitive personal data. Before launching an AI tool with potential privacy implications, companies must demonstrate that they have minimised data collection, instituted anonymisation measures, and built robust cybersecurity protocols.
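In code, the data-minimisation and pseudonymisation measures a DPIA might document can look like the fragment below. The field names and key handling are invented for illustration, and keyed hashing is pseudonymisation rather than full anonymisation, so it would be only one layer of a real DPIA.

```python
# Sketch of two DPIA measures: drop fields the AI tool does not need,
# and replace the direct identifier with a keyed hash so records can be
# linked without exposing the identity. All names here are hypothetical.
import hashlib
import hmac

SECRET_KEY = b"example-key-store-in-a-vault-and-rotate"  # placeholder
NEEDED_FIELDS = {"postcode", "age_band", "claim_amount"}

def pseudonymise(record):
    """Keep only needed fields; derive a stable pseudonymous subject id."""
    out = {k: v for k, v in record.items() if k in NEEDED_FIELDS}
    digest = hmac.new(SECRET_KEY, record["email"].encode(), hashlib.sha256)
    out["subject_id"] = digest.hexdigest()[:16]
    return out

row = {"email": "jo@example.com", "name": "Jo Citizen",
       "postcode": "2000", "age_band": "35-44", "claim_amount": 1280.0}
print(pseudonymise(row))  # name and email never reach the AI pipeline
```

Using a keyed HMAC rather than a plain hash matters: without the secret key, an attacker cannot recompute hashes from a list of known email addresses to re-identify subjects.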

9. Explainability in Automated Decision-Making

Under the ninth recommendation, AI systems embedded in hiring, academic admissions, financial lending, or healthcare diagnostics must offer transparent reasoning. This ensures that individuals adversely affected by an AI-generated decision can request a human review and an explanation of how specific inputs led to that outcome. Such a measure defends due process and fosters public trust.
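One way to picture the transparency this recommendation asks for: with a simple linear scoring model, each input's contribution to the final decision can be reported directly alongside the outcome. The weights, features, and threshold below are invented, and real lending models are rarely this simple.

```python
# A hypothetical linear lending score that returns its own explanation:
# each feature's weighted contribution is exposed, so an affected person
# can see exactly which inputs drove the decision.

WEIGHTS = {"income": 0.4, "credit_history": 0.5, "existing_debt": -0.3}

def score_with_explanation(applicant, threshold=0.5):
    """Score normalised inputs in [0, 1]; return decision + breakdown."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    total = sum(contributions.values())
    return {
        "approved": total >= threshold,
        "score": round(total, 3),
        "contributions": {f: round(c, 3) for f, c in contributions.items()},
    }

result = score_with_explanation(
    {"income": 0.9, "credit_history": 0.6, "existing_debt": 0.8})
print(result)  # the breakdown shows existing_debt pulled the score down
```

For opaque models such as deep networks, the same obligation is typically met with post-hoc attribution techniques rather than exact weights, but the contract with the affected individual is the same: a decision must arrive with its reasons.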

10. Expansion of Workplace Safety Regulations

The tenth recommendation extends existing Occupational Health and Safety (OHS) frameworks to include AI. Some employers might use AI-driven tools for employee monitoring, performance scoring, or workload assignment, so regulators must guarantee that AI does not enable intrusive surveillance or cause psychological harm. This would require an update of OHS guidelines to recognise AI-driven tools as potential risk factors.

11. Community Engagement in AI Governance

The eleventh recommendation calls for a robust public engagement plan, including workshops, digital platforms, and public hearings, so citizens can weigh in on how AI is used in their communities. Such transparency and dialogue would help allay fears and democratise technology decisions that could otherwise unfold behind closed corporate doors.

12. Sustainability as a Core AI Priority

The twelfth recommendation ties AI policy to environmental goals. AI systems consume substantial computing power and resources, potentially contributing to carbon emissions. The Senate suggests that large-scale data centres and AI research projects incorporate energy efficiency, renewable energy usage, and carbon offset strategies, aligning AI growth with Australia’s broader sustainability commitments.

13. International Collaboration and Strategic Autonomy

Finally, the thirteenth recommendation underscores the need for Australian leadership in global AI conversations. The Senate urges alignment with international standards while bolstering Australia’s technological sovereignty in sensitive areas such as defence and critical infrastructure. This dual-track approach ensures Australia remains competitive while safeguarding national interests in a rapidly evolving geopolitical landscape.

7. Ethical and Societal Dimensions

The Australian Human Rights Commission has cautioned that ethical lapses in AI deployment can undermine civil liberties, entrench systemic biases, and erode democracy (Australian Human Rights Commission, 2025, Reference 10). Such concerns resonate with the National Framework for AI Ethics and Governance, which highlights transparency, contestability, and fairness as essential principles (Australian Government, 2025, Reference 11). Yet critics note that while Australia’s AI framework is well-intentioned, its guidelines remain mostly voluntary, limiting their enforceability.

In practical terms, AI systems exhibit biases when trained on datasets that reflect historical inequities. A hiring algorithm might inadvertently filter out candidates from specific backgrounds if the training data uncritically incorporates past hiring decisions. Similarly, generative AI may reinforce gender or racial stereotypes in its outputs. Addressing these issues requires developing robust AI ethics standards, implementing ongoing audits, and empowering regulators to levy fines or impose corrective actions. Moreover, as AI is integrated into sensitive public sectors like healthcare, policing, and social welfare, the stakes of responsible usage climb significantly, given that mistakes or biases can have dire consequences for individuals’ lives.

Beyond bias, AI usage raises privacy considerations. Government agencies and private firms can accumulate vast data troves, gleaning insights about users or citizens. In principle, advanced algorithms can link data from disparate sources—purchasing habits, travel records, social media footprints—to create highly detailed profiles. Such capacities can facilitate targeted marketing or more accurate credit scoring, but they also blur the lines around consent and personal autonomy. OpenGov Asia has examined how privacy regulations worldwide grapple with the moral hazards of AI-driven surveillance (OpenGov Asia, 2025, Reference 12). Australia’s approach remains in flux, with calls for expansion of the Privacy Act and heightened enforcement powers for relevant watchdog bodies.

8. Mitigating Risks and Maximising Benefits

The logical next question is how Australia can harness AI’s strengths while mitigating its hazards. The combination of broad-based AI literacy, proactive regulation, and workforce transition support appears integral to this balancing act. Some possible pathways include:

Investing in Education at All Levels: As recommended by the Senate, integrating AI fundamentals into secondary education ensures future workers develop critical thinking and an appreciation for how algorithms function. Tertiary institutions can move beyond theoretical AI courses by fostering collaborations with industry and offering projects that give students real-world experience. Scholarships and subsidies for AI-relevant training can open doors to groups historically underrepresented in technology sectors, including women, Indigenous Australians, and those from lower socioeconomic backgrounds.

Strengthening Social Safety Nets: For all the talk of new job categories emerging, such as AI engineers, data scientists, and machine learning model trainers, these roles often demand steep learning curves and specialised training. Policymakers must consider bridging allowances or extended unemployment benefits for workers in transition, along with fully funded short courses in data literacy. Without a social safety net, layoffs driven by automation can lead to long-term unemployment or precarious gig work.

Encouraging Ethical Innovation: Public trust is key to broad acceptance of AI solutions. Companies adopting AI ethically will likely see better brand loyalty and fewer regulatory hiccups. Conversely, organisations that use AI unethically—whether by ignoring biases or skirting privacy laws—may face reputational crises and legal liabilities. Mandatory auditing, transparent model documentation, and open channels for recourse can encourage an ecosystem of responsible innovation.

Regional and International Collaboration: Given AI’s global nature, Australia benefits from alliances with international research bodies, tech organisations, and regulatory agencies. Joint research projects can minimise duplication of effort and expedite breakthroughs, especially in AI safety and interpretability. Simultaneously, knowledge exchange regarding best practices in policy design can help Australia craft globally aligned and locally relevant frameworks.

Incentivising Responsible AI Deployment: Policymakers might explore financial incentives—tax rebates, grants, or public procurement preferences—for companies that demonstrate responsible AI usage, robust staff training programs, and minimal displacement of their workforce. Such incentives could function in tandem with penalties levied for documented abuses. Australia could steer business leaders toward a more balanced approach by linking AI adoption to demonstrable social benefits.

9. Projecting Australia’s AI Future

There is little doubt that AI will continue revolutionising the Australian workplace over the coming years. Advancements in generative AI will allow systems to mimic—and perhaps surpass—human creativity in producing designs, texts, or even preliminary legal documents. Automated vehicles and autonomous drones could redefine the transportation, mining, and agricultural industries. Innovations in quantum computing might accelerate AI’s analytical capacities, solving problems previously beyond computational reach.

McKinsey & Company’s detailed analysis projects that as many as 40% of job tasks in Australia could be automated within the next two decades, while noting that job categories rarely vanish overnight (McKinsey & Company, 2025, Reference 6). The reality is frequently more nuanced: partial automation of tasks within a role changes the job’s nature, leaving some responsibilities intact but demanding that workers pivot to tasks requiring human judgment, empathy, or creative thought. According to the McKinsey report, the net effect largely depends on how governments, industries, and educational institutions respond.

Those who embrace retraining, are open to new technologies, and operate in dynamic sectors may view AI as a powerful enabler. Others may be excluded from economic gains, particularly in stagnant or vulnerable job markets. The tension between these outcomes gives rise to debates about universal basic income (UBI), guaranteed retraining rights, or other policy instruments to ensure that no Australian is left behind in the AI revolution. For instance, some politicians and advocates argue that introducing a baseline income or bridging salary for displaced workers can provide a cushion, allowing them to upskill without incurring massive financial hardship.

In Conclusion: Building an Inclusive AI-driven Australia

From the dawn of mechanised looms to the modern era of cloud computing, technological shifts have regularly prompted both trepidation and transformation. AI represents the latest, and arguably one of the most profound, iterations of this pattern. While the long-term outcomes remain partly contingent on how quickly Australians adapt, the insights presented in this report underscore a fundamental truth: planning and proactive measures can significantly steer AI’s trajectory toward equitable and inclusive growth.

Australia finds itself poised between the promise of enhanced competitiveness, medical breakthroughs, and cross-sector innovation and the peril of job displacement, ethical quandaries, and heightened inequality. The thirteen recommendations from the Senate Select Committee offer a roadmap for navigating these challenges. They propose a careful balance between capturing economic gains and safeguarding core human values, from transparent decision-making to equitable workplace protections. Implementing these measures will require collaboration among policymakers, educational institutions, the private sector, and civil society.

The National Framework for AI Ethics and Governance (Australian Government, 2025, Reference 11) can be a cornerstone for shaping laws and corporate guidelines. Yet, it must be reinforced by consistent enforcement and real incentives. In parallel, community-level initiatives can boost AI literacy and ensure that laypeople, not just industry insiders, understand the implications of AI in everyday life. As the Senate recommends, opening channels for public debate is equally essential. If Australians from different regions, socioeconomic backgrounds, and fields are included in these discussions, it increases the probability that AI will become a collective asset rather than a divisive force.

In the final analysis, AI adoption is not a matter of if but how. Australia can allow market forces alone to determine outcomes, which risks abrupt dislocation for millions of workers, unchecked biases, and overshadowed ethical concerns. Or it can seize this moment to pioneer strategies that place human dignity, fairness, and sustainability at the heart of AI policymaking. Should Australia choose the latter path, it may become a global model for responsibly leveraging AI. The future hinges on deliberate action—and the decisions taken now will echo through labor markets, corporate boardrooms, and households for decades to come.

References

1. Tech Council of Australia. (2025). Australia’s Tech Leaders Identify AI as Defining Tech Trend in 2025. Retrieved from https://techcouncil.com.au/newsroom/australias-tech-leaders-identify-ai-as-defining-tech-trend-in-2025/

2. IT Brief Australia. (2025). How Agentic Automation Will Reshape Australian Industries in 2025. Retrieved from https://itbrief.com.au/story/how-agentic-automation-will-reshape-australian-industries-in-2025

3. Google Australia. (2025). AI Adoption in Australia: New Survey Reveals Increased Use & Belief in Potential. Retrieved from https://blog.google/intl/en-au/company-news/ai-adoption-in-australia-new-survey-reveals-increased-use-belief-in-potential/

4. Australian Council of Trade Unions (ACTU). (2025). One in Three Australian Workers at Risk from AI Automation. Retrieved from https://www.actu.org.au/media-release/one-in-three-workers-at-risk-from-ai-unions-call-for-fair-go-in-the-digital-age/

5. Randstad Australia. (2025). AI Skills Gap Threatens Australian Job Security. Retrieved from https://cfotech.com.au/story/ai-skills-gap-threatens-australian-job-security-says-randstad

6. McKinsey & Company. (2025). Generative AI and the Future of Work in Australia. Retrieved from https://www.mckinsey.com/industries/public-sector/our-insights/generative-ai-and-the-future-of-work-in-australia

7. Technology Decisions. (2025). AI to Automate 1.3M Australian Jobs by 2027. Retrieved from https://www.technologydecisions.com.au/content/it-management/article/ai-to-automate-1-3m-aussie-jobs-by-2027-servicenow-894357622

8. Sky News Australia. (2025). AI and the Future of Australian Jobs. Retrieved from https://www.skynews.com.au/australia-news/chatgpt-reveals-the-australian-jobs-most-and-least-at-risk-from-artificial-intelligence-with-some-surprising-gigs-making-the-list/news-story/0cbb029ff3f555a62376269ca9520aeb

9. PwC Australia. (2025). AI Jobs Barometer: Emerging Roles & Industry Trends. Retrieved from https://www.pwc.com.au/services/artificial-intelligence/ai-jobs-barometer.html

10. Australian Human Rights Commission. (2025). Ethical AI and the Australian Workforce: Policy Recommendations. Retrieved from https://humanrights.gov.au/our-work/legal/submission/utilising-ethical-ai-education-system

11. Australian Government. (2025). National Framework for AI Ethics and Governance. Retrieved from https://www.industry.gov.au/publications/implementing-australias-ai-ethics-principles-selection-responsible-ai-practices-and-resources

12. OpenGov Asia. (2025). Australia’s AI Regulation Strategy: Balancing Innovation and Worker Protections. Retrieved from https://opengovasia.com/2025/01/18/australias-tech-sector-embraces-ai-for-growth-in-2025/

13. Australian Parliament. (2025). The Future of Work: AI’s Impact on Industry, Business, and Workers. Retrieved from https://www.aph.gov.au/About_Parliament/House_of_Representatives/About_the_House_News/Media_Releases/The_Future_of_Work_Report_Released

14. IMF. (2025). AI’s Economic Potential in Australia: Growth or Disruption? Retrieved from https://www.imf.org/en/Blogs/Articles/2024/01/14/ai-will-transform-the-global-economy-lets-make-sure-it-benefits-humanity

15. World Economic Forum. (2025). AI, Automation, and the Shifting Workforce Landscape in Australia. Retrieved from https://www.weforum.org/stories/2025/01/future-of-jobs-report-2025-jobs-of-the-future-and-the-skills-you-need-to-get-them/