TopMyGrade


CS8.1 Ethical issues: AI bias, censorship, privacy intrusions, the digital divide; weighing benefits to society against harm

Notes

Ethical issues in digital technology

Ethics in computing asks: just because we can do something with technology, should we? The AQA GCSE specification expects you to discuss four main ethical concerns — AI bias, censorship, privacy intrusions and the digital divide — weighing benefits to society against potential harms.

AI bias

Artificial intelligence systems learn from data. If that training data reflects historical prejudices, the AI can perpetuate or amplify them.

Examples of AI bias:

  • Facial recognition software that is less accurate for darker skin tones (trained mostly on lighter-skinned faces)
  • Recruitment algorithms that favour male candidates because historical hiring data skewed male
  • Predictive policing tools that over-target minority communities

Why it matters: AI decisions affect real people — hiring, loan approvals, medical diagnoses, parole decisions. Biased AI can systematically disadvantage groups who are already marginalised.

Mitigations:

  • Diverse, representative training data
  • Regular bias audits of deployed models
  • Human oversight and appeal processes
  • Transparency in how decisions are made
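One common bias-audit check is the "four-fifths" (disparate impact) rule: compare the selection rates of two groups, and treat a ratio below 0.8 as a warning sign. The sketch below is a minimal, hypothetical illustration — the group data and threshold are assumptions, not part of any specific AQA requirement.

```python
# Minimal sketch of a bias audit using the "four-fifths" rule.
# Outcomes are lists of 1 (selected) and 0 (rejected); data is hypothetical.

def selection_rate(outcomes):
    """Fraction of candidates selected."""
    return sum(outcomes) / len(outcomes)

def disparate_impact(group_a, group_b):
    """Ratio of the lower selection rate to the higher one.
    A ratio below 0.8 is a common warning sign of bias."""
    lower, higher = sorted([selection_rate(group_a), selection_rate(group_b)])
    return lower / higher

# Hypothetical hiring outcomes for two demographic groups
group_a = [1, 1, 0, 1, 0, 1, 1, 0]  # 5/8 selected = 0.625
group_b = [1, 0, 0, 0, 1, 0, 0, 0]  # 2/8 selected = 0.25

ratio = disparate_impact(group_a, group_b)
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.25 / 0.625 = 0.40
if ratio < 0.8:
    print("Warning: possible bias - investigate further")
```

A real audit would go further (statistical significance tests, intersectional groups), but even this simple check can flag a recruitment model that systematically favours one group.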

Censorship

Censorship is restricting access to information or communication, often by governments or corporations.

Arguments for censorship:

  • Prevents spread of harmful content (terrorism, child exploitation, incitement to violence)
  • Protects citizens from misinformation and propaganda
  • Can prevent real-world harm (e.g., instructions for making weapons)

Arguments against censorship:

  • Limits freedom of speech and expression — a fundamental human right
  • Can be used to suppress legitimate political dissent
  • Difficult to apply fairly — who decides what is harmful?
  • Citizens in authoritarian states cannot access independent news

Digital angle: Countries such as China use the "Great Firewall" to block foreign websites. Social media platforms must balance removing harmful content against accusations of bias.

Privacy intrusions

Technology enables unprecedented collection of personal data.

Forms of privacy intrusion:

  • Surveillance cameras (CCTV + facial recognition) — governments and businesses track movements
  • Online tracking — cookies, browser fingerprinting and social media monitoring build detailed profiles of behaviour
  • Smart devices — smart speakers, phones, fitness trackers continuously collect data
  • Data harvesting — apps request access to contact lists, location and the microphone beyond their stated need

Benefits vs harms:

Benefit                          | Harm
Crime detection and prevention   | Chilling effect on free speech
Personalised services            | Loss of anonymity
National security                | Data breaches expose personal information
Health monitoring                | Discrimination by insurers/employers

The digital divide

The digital divide is the gap between those who have access to technology and those who do not.

Dimensions of the divide:

  • Access — no internet connection or device (rural areas, low income households)
  • Skills — inability to use digital tools effectively
  • Content — limited content in local languages

Consequences:

  • Those without access cannot benefit from e-government, online education, telemedicine, job portals
  • The gap compounds over time — lack of digital skills leads to lower earnings, which limits the ability to buy devices, so children fall further behind

Efforts to bridge the gap: subsidised broadband, low-cost devices, digital literacy programmes, free public Wi-Fi.

Balancing benefits and harms

Ethical decisions in technology are rarely black-and-white. A framework:

  1. Identify all stakeholders (users, companies, governments, society).
  2. List benefits and harms for each.
  3. Weigh against principles: fairness, privacy, freedom, safety.
  4. Consider who bears the risk vs who gets the benefit.

In exam answers, always present both sides before reaching a conclusion.

AI-generated · claude-opus-4-7 · v3-deep-computer-science

Practice questions

Try each before peeking at the worked solution.

  1. Question 1 (3 marks)

    AI bias

    Explain what is meant by AI bias and give one example of the harm it could cause.


  2. Question 2 (2 marks)

    Digital divide

    Describe two consequences of the digital divide for individuals without internet access.


  3. Question 3 (5 marks)

    Censorship trade-off

    Discuss whether governments should have the right to censor internet content. In your answer, give arguments for and against, then reach a conclusion.


  4. Question 4 (4 marks)

    Privacy intrusion example

    Give two ways technology can intrude on a person's privacy and explain the harm each could cause.


  5. Question 5 (3 marks)

    Reducing AI bias

    Suggest three measures that developers could take to reduce bias in an AI recruitment system.


  6. Question 6 (4 marks)

    Benefit vs harm analysis

    A smart city plans to install facial recognition cameras on every street corner to reduce crime. Evaluate this proposal, considering at least one benefit and one ethical concern.


Flashcards

CS8.1 — Ethical issues in digital technology

11-card spaced-repetition (SM-2) deck for AQA GCSE Computer Science topic CS8.1