Top 5 Things the AI Act Changes for Small Companies

🧠 Introduction: AI Regulation Isn’t Just for Big Tech Anymore

The EU Artificial Intelligence Act (AI Act) is the first law in the world to regulate AI across all sectors. While large tech platforms dominate headlines, this law also affects small businesses.

If you run:

  1. A SaaS platform

  2. An HR tool

  3. An online shop using smart recommendations

  4. A marketing agency using AI-generated content

…then this law applies to you.

Even if you use third-party tools like ChatGPT, Claude, or Gemini, the way you use those tools could trigger legal obligations.

⚖️ 1. You Might Be Using a High-Risk AI System

The AI Act defines four categories of risk:

⛔ Unacceptable risk → Banned completely

⚠️ High risk → Strict controls and documentation required

🟡 Limited risk → Transparency obligations apply

🟢 Minimal risk → No legal duties beyond basic safety

You’re likely using high-risk AI if your system:

  1. Scores CVs or filters job candidates

  2. Evaluates credit scores or eligibility

  3. Manages biometric access

  4. Automates decisions in healthcare, education, or finance

Even using third-party software, such as a CRM or applicant tracking system (ATS) plugin, can make your business responsible, depending on how the tool is used.

🔍 Key takeaway: Risk is based on the context — not the vendor.
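
To make that context test concrete, here is a minimal Python sketch of an internal triage helper. The keyword lists are our own illustrative placeholders, not the Act's wording; Annex III remains the authoritative checklist.

```python
# Hypothetical triage helper: maps a plain-text description of an AI use
# case to a rough AI Act risk bucket. Keyword lists are illustrative only;
# Annex III of the AI Act is the authoritative source.

HIGH_RISK_SIGNALS = {
    "employment": ["cv", "resume", "candidate", "hiring"],
    "credit": ["credit score", "loan", "creditworthiness"],
    "biometrics": ["biometric", "face recognition", "fingerprint"],
    "essential services": ["healthcare", "education", "public benefits"],
}

def triage_use_case(description: str) -> str:
    """Return a rough risk bucket for an AI use-case description."""
    text = description.lower()
    for area, signals in HIGH_RISK_SIGNALS.items():
        if any(signal in text for signal in signals):
            return f"potentially high-risk ({area}): verify against Annex III"
    return "likely limited/minimal risk: transparency duties may still apply"

print(triage_use_case("Plugin that scores CVs for our hiring pipeline"))
# -> potentially high-risk (employment): verify against Annex III
```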

🛠️ 2. You May Be a “Provider” — Not Just a User

The AI Act separates roles. Each comes with different legal responsibilities.

You’re a provider if you:

  1. Build your own AI model

  2. Retrain or fine-tune someone else’s model

  3. Embed or repackage general-purpose AI into your product

You’re a deployer if you:

  1. Use the AI in internal workflows

  2. Offer it to users as part of your services

Sometimes, you’re both.

If you’re a provider, you’re legally required to:

  1. Perform conformity assessments

  2. Maintain detailed technical documentation

  3. Monitor and report system performance after release

💡 Even retraining a chatbot, fine-tuning it on your own data, or rebranding it under your own name can qualify you as a provider under the law.

🧾 3. FRIAs Are the New DPIAs (And SMEs Must Do Them)

A Fundamental Rights Impact Assessment (FRIA) is required before deploying high-risk AI in:

  1. Education

  2. Employment

  3. Credit scoring

  4. Healthcare

  5. Public services

Even if you’re using third-party AI, you’re expected to evaluate its risks to people’s rights.

A FRIA focuses on:

  1. Risk of bias or discrimination

  2. Loss of transparency

  3. Restricted access to opportunities

  4. Misuse of profiling or categorization

🧩 Pro tip: Align your FRIA with existing DPIAs for GDPR compliance. The overlap is significant.
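
To keep that overlap manageable, one option is to record each FRIA as structured data next to its DPIA. Below is a minimal sketch; the field names and example values are our own suggestion, not a format prescribed by the Act.

```python
from dataclasses import dataclass, field

# Minimal FRIA record sketch. Field names are illustrative; the AI Act
# prescribes the content of the assessment, not a file format.

@dataclass
class FriaRecord:
    system_name: str
    context: str                    # e.g. "employment", "credit scoring"
    affected_groups: list[str]      # who is exposed to the system's decisions
    bias_risks: str                 # risk of bias or discrimination
    transparency_gaps: str          # where people lose visibility
    access_impacts: str             # restricted access to opportunities
    profiling_concerns: str         # misuse of profiling or categorization
    mitigations: list[str] = field(default_factory=list)
    linked_dpia: str | None = None  # reuse your GDPR DPIA where it overlaps

fria = FriaRecord(
    system_name="CV screening plugin",
    context="employment",
    affected_groups=["job applicants"],
    bias_risks="Model may down-rank non-native phrasing",
    transparency_gaps="Applicants are not told a model pre-screens them",
    access_impacts="Rejected applicants never reach a human reviewer",
    profiling_concerns="Inferred traits could proxy protected attributes",
    mitigations=["human review of all rejections", "quarterly bias audit"],
    linked_dpia="DPIA-2025-014",
)
```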

🧮 4. Transparency Rules Apply to Most AI Use Cases

Even if your AI use is not “high-risk,” you still have to follow transparency rules. These include:

  1. Letting people know when they interact with AI (e.g. chatbot)

  2. Explaining how AI-generated results are created

  3. Labeling AI-generated content (text, video, audio)

  4. Disclosing whether user data was used to train the AI

This applies to:

  1. Marketing chatbots

  2. Dynamic pricing tools

  3. Predictive analytics

  4. AI-generated emails or proposals

🧠 Create a reusable “AI Use Notice” and include it in your privacy policy or onboarding flows.
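
If your chatbot or content pipeline runs through code you control, the notice can live in one function and be reused everywhere. A minimal sketch follows; the notice text is example copy of our own, not legally mandated phrasing.

```python
# Reusable "AI Use Notice" sketch: one function, reused across surfaces.
# The wording below is example copy, not an official formula.

def ai_use_notice(tool: str, purpose: str, trains_on_user_data: bool) -> str:
    training = (
        "Your inputs may be used to improve the model."
        if trains_on_user_data
        else "Your inputs are not used to train the model."
    )
    return (
        f"This feature uses {tool} to {purpose}. "
        f"You are interacting with an AI system, not a human. {training}"
    )

# Same notice, reused in a chat widget, an email footer, a privacy policy.
notice = ai_use_notice("a large language model", "draft replies", False)
print(notice)
```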

📊 5. You Must Monitor and Document AI Use — Continuously

Even small teams are expected to keep records of how they use AI.

Your documentation should include:

  1. A simple AI system register

  2. Logs of usage, retraining, or API modifications

  3. Known risks and how they’re addressed

  4. Any incidents, complaints, or biases discovered

✅ Use a Notion board, Google Sheet, or Excel file.
You don’t need fancy software — you just need proof that you’re paying attention.
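
If your AI calls already go through code, the log can largely write itself. Here is a minimal sketch using Python's standard logging module; the event fields are our own suggestion, not a mandated schema.

```python
import json
import logging
from datetime import datetime, timezone

# Minimal AI usage log: one JSON line per event, appended to a file you
# can hand to an auditor. Field names are a suggestion, not a mandate.

logging.basicConfig(filename="ai_usage.log", level=logging.INFO,
                    format="%(message)s")

def log_ai_event(system: str, event: str, detail: str) -> None:
    logging.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system": system,   # which tool from your AI register
        "event": event,     # "usage" | "retraining" | "incident"
        "detail": detail,
    }))

log_ai_event("CV screening plugin", "incident",
             "Applicant complaint: rejection with no human review")
```

One JSON line per event keeps the file human-readable and easy to filter when a regulator or client asks how a system behaved.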

🤖 What If You Use ChatGPT, Claude, or Gemini?

That’s where most SMEs land — and yes, you still need to comply.

You must:

  1. Identify whether the system is general-purpose AI (GPAI)

  2. Check whether the provider has met its own EU AI Act obligations (documentation and transparency for its models)

  3. Keep documentation on how you use it and for what purpose

💬 Example: If you prompt ChatGPT to screen CVs and make hiring suggestions, you’re deploying a high-risk system — and may even count as a provider.

🧰 SME Action Plan

Here’s how to get compliant, step-by-step:

✅ 1. List all AI tools

Include marketing, HR, analytics, customer service, etc.

✅ 2. Define your role

Are you a deployer, an integrator, a modifier, or a full provider?

✅ 3. Check the risk

Use Annex III of the AI Act to classify your use.

✅ 4. Document it

Create a simple AI register with the following fields (a minimal code sketch follows this list):

  1. Tool name

  2. Purpose

  3. Data used

  4. Risk category

  5. Compliance status
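
Here is the promised sketch: the same register kept as a plain CSV file, so it opens directly in Excel, Google Sheets, or a Notion import. The two rows are invented example entries.

```python
import csv

# Minimal AI system register written as plain CSV. Opens in Excel,
# Google Sheets, or a Notion import. The two rows are example entries.

COLUMNS = ["tool_name", "purpose", "data_used", "risk_category",
           "compliance_status"]

ROWS = [
    ["ChatGPT (API)", "Draft marketing copy", "No personal data",
     "minimal", "AI Use Notice published"],
    ["ATS screening plugin", "Rank job applicants", "CVs, cover letters",
     "high", "FRIA pending"],
]

with open("ai_register.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(COLUMNS)
    writer.writerows(ROWS)
```

Append a row whenever a new tool enters your stack, and the register doubles as the inventory from step 1.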

✅ 5. Prepare a FRIA (if needed)

Use our upcoming template to evaluate risks in a legally sound way.

📣 SME Support Coming Soon

We’re building the first AI compliance toolkit designed for small businesses.

🛠️ Inside the AI Act Toolkit:

  • ✅ FRIA Templates

  • ✅ AI System Registers (Excel + Notion)

  • ✅ “AI Use Notice” builder

  • ✅ Compliance checklists for OpenAI, Meta, and Google tools

🎯 Launching soon at privacy-docs.com
📬 Join the waitlist now to receive it the moment it drops.

🔗 Keep Exploring:

👉 What is GDPR+? A Simple Guide for SMEs

👉 How the DMA Affects Cookie Banners and Consent

👉 GDPR in 2025: What’s Actually Changing
