Ethical issues in digital technology
Ethics in computing asks: just because we can do something with technology, should we? AQA GCSE expects you to discuss four main ethical concerns — AI bias, censorship, privacy intrusions and the digital divide — weighing benefits to society against potential harms.
AI bias
Artificial intelligence systems learn from data. If that training data reflects historical prejudices, the AI can perpetuate or amplify them.
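This mechanism can be demonstrated with a toy sketch in Python (illustrative only; the records and numbers below are invented, not from any real system): a naive model that simply learns historical hire rates will reproduce whatever skew its training data contains.

```python
from collections import defaultdict

# Hypothetical historical hiring records: (group, hired?) pairs.
# The data is deliberately skewed, as past decisions often were.
history = ([("male", True)] * 80 + [("male", False)] * 20
           + [("female", True)] * 30 + [("female", False)] * 70)

# A naive "model": predict each group's historical hire rate.
totals, hires = defaultdict(int), defaultdict(int)
for group, hired in history:
    totals[group] += 1
    hires[group] += hired  # True counts as 1, False as 0

for group in totals:
    rate = hires[group] / totals[group]
    print(f"{group}: predicted hire probability {rate:.0%}")
# Prints male 80%, female 30%: the bias in the data
# becomes the bias in the model's predictions.
```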
Examples of AI bias:
- Facial recognition software that is less accurate for darker skin tones (trained mostly on lighter-skinned faces)
- Recruitment algorithms that favour male candidates because historical hiring data skewed male
- Predictive policing tools that over-target minority communities
Why it matters: AI decisions affect real people — hiring, loan approvals, medical diagnoses, parole decisions. Biased AI can systematically disadvantage groups who are already marginalised.
Mitigations:
- Diverse, representative training data
- Regular bias audits of deployed models (a minimal audit sketch follows this list)
- Human oversight and appeal processes
- Transparency in how decisions are made
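One possible audit check, sketched below with invented decision counts, is to compare selection rates between groups and flag large gaps. The 0.8 threshold echoes the "four-fifths rule" of thumb used in US employment law; real audits use a range of fairness metrics.

```python
# Simplified bias audit: compare a model's selection rates by group.
# All numbers are invented for illustration.
decisions = {
    "group_a": {"selected": 60, "total": 100},
    "group_b": {"selected": 30, "total": 100},
}

rates = {g: d["selected"] / d["total"] for g, d in decisions.items()}
ratio = min(rates.values()) / max(rates.values())

print(f"Selection rates: {rates}")
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:  # the "four-fifths" rule of thumb
    print("Potential bias: escalate for human review")
```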
Censorship
Censorship is restricting access to information or communication, often by governments or corporations.
Arguments for censorship:
- Prevents spread of harmful content (terrorism, child exploitation, incitement to violence)
- Protects citizens from misinformation and propaganda
- Can prevent real-world harm (e.g., instructions for making weapons)
Arguments against censorship:
- Limits freedom of speech and expression — a fundamental human right
- Can be used to suppress legitimate political dissent
- Difficult to apply fairly — who decides what is harmful?
- Citizens in authoritarian states cannot access independent news
Digital angle: Countries such as China use the "Great Firewall" to block foreign websites. Social media platforms must balance removing harmful content against accusations of bias.
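Automated moderation shows why fair application is hard in practice. The sketch below is a deliberately naive, hypothetical keyword filter (the blocklist is invented); it over-blocks harmless messages that merely contain a banned word, a failure mode sometimes called the "Scunthorpe problem".

```python
# A naive keyword filter, of the kind real platforms must improve on.
BANNED = ["bomb", "attack"]

def is_blocked(message: str) -> bool:
    """Block any message containing a banned substring."""
    text = message.lower()
    return any(word in text for word in BANNED)

print(is_blocked("How to build a bomb"))               # True: intended
print(is_blocked("The film was a bombshell"))          # True: false positive
print(is_blocked("Our team will attack the problem"))  # True: false positive
# Simple substring rules over-block harmless speech and miss harmful
# speech that avoids the exact words, which is one reason
# "who decides what is harmful?" has no easy answer.
```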
Privacy intrusions
Technology enables unprecedented collection of personal data.
Forms of privacy intrusion:
- Surveillance cameras (CCTV + facial recognition) — governments and businesses track movements
- Online tracking — cookies, browser fingerprinting and social media monitoring build detailed profiles of behaviour (a fingerprinting sketch follows this list)
- Smart devices — smart speakers, phones, fitness trackers continuously collect data
- Data harvesting — apps collect contact lists, location data and microphone recordings beyond their stated need
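Browser fingerprinting, for example, combines attributes a browser reveals anyway into a near-unique identifier, with no cookie needed. A minimal sketch, assuming invented attribute values:

```python
import hashlib

# Attributes a website can typically read without asking permission.
# The values here are invented examples.
attributes = {
    "user_agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
    "screen": "1920x1080",
    "timezone": "Europe/London",
    "language": "en-GB",
    "installed_fonts": "Arial,Calibri,Times New Roman",
}

# Concatenate and hash: the result is stable for this device,
# so a site can recognise a returning visitor across the web.
raw = "|".join(f"{k}={v}" for k, v in sorted(attributes.items()))
fingerprint = hashlib.sha256(raw.encode()).hexdigest()[:16]
print(f"Fingerprint: {fingerprint}")
```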
Benefits vs harms:
| Benefit | Harm |
|---|---|
| Crime detection and prevention | Chilling effect on free speech |
| Personalised services | Loss of anonymity |
| National security | Data breaches expose personal info |
| Health monitoring | Discrimination by insurance/employers |
The digital divide
The digital divide is the gap between those who have access to technology and those who do not.
Dimensions of the divide:
- Access — no internet connection or device (rural areas, low-income households)
- Skills — inability to use digital tools effectively
- Content — limited content in local languages
Consequences:
- Those without access cannot benefit from e-government, online education, telemedicine, job portals
- The gap compounds over time: lower digital skills → lower earnings → less money for devices → children fall further behind
Efforts to bridge the gap: subsidised broadband, low-cost devices, digital literacy programmes, free public Wi-Fi.
Balancing benefits and harms
Ethical decisions in technology are rarely black-and-white. A framework:
- Identify all stakeholders (users, companies, governments, society).
- List benefits and harms for each.
- Weigh against principles: fairness, privacy, freedom, safety.
- Consider who bears the risk vs who gets the benefit.
In exam answers, always present both sides before reaching a conclusion.