James Lawn

Big Tech, EU AI Regulation and the $2tn Market Correction

The EU AI Act came into effect on 1 August 2024, raising concerns about the consequences for Big Tech firms, which will undoubtedly be among the most heavily targeted names under the new rules.


“The AI Act has implications that go far beyond the EU. It applies to any organisation with any operation or impact in the EU, which means the AI Act will likely apply to you no matter where you’re located,” says Charlie Thompson, senior vice president of EMEA and LATAM for enterprise software firm Appian Corporation. “This will bring much more scrutiny on tech giants when it comes to their operations in the EU market and their use of EU citizen data.”

The Nasdaq 100 Index moved into correction territory on 2 August 2024, wiping out more than $2 trillion in value in just over three weeks. “This is an amazing about-face, like we’ve crashed into a brick wall,” said Bill Stone, CFA, CMT, chief investment officer at The Glenview Trust Company. “We had a heck of a straight line up, and those don’t last forever, especially since expectations got so high.”


AI and AI Regulation are here to stay

US lawmakers and Big Tech CEOs would seem to agree that we should be regulating AI, and regulating it early, rather than taking the more laid-back approach governments took with social media. “If this technology goes wrong, it can go quite wrong,” OpenAI CEO Sam Altman said at a congressional hearing in May last year.

Furthermore, whilst the S&P 500 had its biggest decline since December 2022, with Big Tech leading the way down, it is also true that the US equities benchmark went nearly 18 months to the end of July without a drop of at least 2% — its longest streak in 17 years, according to data compiled by Bloomberg.


HSBC's Max Kettner, CFA, remains bullish on the tech sector:

“This is a healthy correction that we all want to lean into. With Q2 GDP growth at nearly 6%, I don’t know if that qualifies as a recession (I might have skipped my lectures at uni), but Q2 earnings are still at 14% growth year on year.”

With or without market slumps, AI is here to stay and will continue its advance into society, for better and for worse. Discussion about AI guardrails and regulation, and the deeper insights and understanding we can develop through this dialogue, are a critical part of ensuring AI advancements take the positive and responsible direction we want them to.


“AI pushes us all to think deeper about what, why and how we do things. I welcome the responsible way we are approaching AI in general: so different from how we approached ‘the internet’, when it took far too long to care about what responsibility really meant with a general-purpose technology,” says Bronwyn Kunhardt, co-founder of Polecat Intelligence™.

What is the EU regulating for?

The EU AI Act is one of a number of landmark regulations from around the world that aim to govern the way companies develop, use and apply AI. Most of all, it will affect EU citizens, who will see the benefits of the Act’s protections as well as the Act’s constraints in terms of the technologies they may not be able to access. It will also affect the big global tech companies engaged in building and developing the most advanced AI systems. In some cases, it will also impact the companies deploying those AI systems.


The types of risks the EU AI Act aims to protect citizens from include:

Prohibited AI practices that pose unacceptable risks


  • deploying subliminal, manipulative, or deceptive techniques to distort behaviour and impair informed decision-making

  • exploiting vulnerabilities related to age, disability, or socio-economic circumstances

  • biometric categorisation and social scoring systems that classify individuals or groups by race, political opinions, religious or philosophical beliefs, sexual orientation, social behaviours or personal traits

  • compiling facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage, and predictive policing based solely on profiling

  • inferring emotions in workplaces or educational institutions, except for medical or safety reasons


High-risk AI practices that require significant diligence and transparency


  • critical infrastructures (e.g. transport), and safety components of products (e.g. AI application in robot-assisted surgery), that could put the life and health of citizens at risk

  • educational or vocational training that may determine access to education and the professional course of someone’s life (e.g. scoring of exams)

  • employment, management of workers and access to self-employment (e.g. CV-sorting software for recruitment procedures)

  • essential private and public services (e.g. credit scoring denying citizens the opportunity to obtain a loan)

  • law enforcement that may interfere with people’s fundamental rights (e.g. evaluation of the reliability of evidence)


Limited-risk general-purpose AI (including generative AI)

The AI Act introduces specific transparency obligations for general-purpose AI. For instance, when using AI systems such as chatbots, humans should be made aware that they are interacting with a machine so they can take an informed decision to continue or step back. Providers also have to ensure that AI-generated content is identifiable: AI-generated text published with the purpose of informing the public on matters of public interest must be labelled as artificially generated, and the same applies to audio and video content constituting deep fakes.
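To make these obligations concrete, here is a minimal Python sketch of how a chatbot operator might satisfy the two requirements just described. The disclosure wording, function names and label format are illustrative assumptions, not anything prescribed by the Act:

```python
# Minimal sketch of two of the Act's transparency obligations as they might
# apply to a chatbot operator: telling users they are talking to a machine,
# and labelling published AI-generated text. The wording, names and label
# format below are illustrative assumptions, not text mandated by the Act.

DISCLOSURE = "You are interacting with an AI system, not a human."


def generate_reply(user_message: str) -> str:
    """Stand-in for a real model call (hypothetical)."""
    return f"(model reply to: {user_message})"


def chatbot_turn(user_message: str, first_turn: bool = False) -> str:
    """Return a reply, prepending the AI disclosure at the start of a session."""
    reply = generate_reply(user_message)
    return f"{DISCLOSURE}\n\n{reply}" if first_turn else reply


def label_published_text(text: str) -> str:
    """Append a visible marker so AI-generated text published to inform the
    public is identifiable as artificially generated."""
    return f"{text}\n\n[This text was artificially generated.]"


if __name__ == "__main__":
    print(chatbot_turn("What does the EU AI Act cover?", first_turn=True))
    print(label_published_text("A short summary of this week's market moves."))
```

In practice, a visible label like this would likely sit alongside machine-readable provenance metadata, but the principle is the same: the reader can always tell that the content came from a machine.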
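Taken together, the three tiers form a simple decision structure. The sketch below uses the same risk levels, but the mapping of example use cases is an illustrative summary of the categories above, not an official classification tool:

```python
# Illustrative sketch only: the tier names follow the Act's risk-based
# structure, but the example mapping is a summary of the categories
# described above, not legal guidance.
from enum import Enum


class RiskTier(Enum):
    PROHIBITED = "banned outright"
    HIGH_RISK = "allowed, subject to significant diligence and transparency"
    LIMITED_RISK = "allowed, subject to specific transparency obligations"


# Example use cases drawn from the categories above (hypothetical labels).
EXAMPLE_USE_CASES = {
    "social scoring of citizens": RiskTier.PROHIBITED,
    "untargeted scraping of facial images": RiskTier.PROHIBITED,
    "CV-sorting software for recruitment": RiskTier.HIGH_RISK,
    "scoring of exams": RiskTier.HIGH_RISK,
    "credit scoring for loan decisions": RiskTier.HIGH_RISK,
    "customer-service chatbot": RiskTier.LIMITED_RISK,
}


def triage(use_case: str) -> str:
    """Look up an example use case and describe its treatment under the Act."""
    tier = EXAMPLE_USE_CASES.get(use_case)
    if tier is None:
        return f"'{use_case}': not in this illustrative list; seek proper advice."
    return f"'{use_case}': {tier.name} ({tier.value})"


if __name__ == "__main__":
    for case in EXAMPLE_USE_CASES:
        print(triage(case))
```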


The Need for Responsible AI Regulation is Clear

The debate over how well regulation protects citizens versus how far it impedes innovation will continue to ebb and flow, like the tech stock market, over the coming months and years. That is as it should be, but the need for the types of protection targeted by the EU AI Act, as we advance the responsible AI solutions of the future, seems clear.
