
We must understand threats in the technology we use every day

Amnesty International campaigners protest against human rights abuses linked to Vedanta's aluminium refinery in Orissa, India, 2010. Lewis Whyld/Press Association. All rights reserved.

The greater the effectiveness and reach of
human rights defenders (HRDs) and organisations, the more attention state
institutions will give them. This means they can be more influential, but also that
these institutions will put more time and effort into knowing what they are up
to and, potentially, disrupting it.

Present threats

At Amnesty, we’ve seen a very significant
increase over the last two years in HRDs telling us that they fear their
communications are being monitored by their governments. We have ourselves
been the subject of illegal surveillance in Britain, the
country where the organisation started and where we have our biggest office.
This is just the direct evidence. There is no doubt that many governments
intercept and monitor the communications of HRDs and organisations and in some
cases even aggressively attack their computers and phones by
hacking them.

As human rights organisations and HRDs, we need
to understand how this happens and how to protect ourselves. Doing nothing
puts people at risk. If your communications are being monitored, the least bad
outcome might be that a government interferes with your research or mobilisation
efforts – for example, evidence you might otherwise have found is
hidden, or your attempt to organise a protest gets scuppered.

But things can get much worse – activists get
arrested, tortured and killed because of the work they do, and it can be in
part because their communications, or your communications with them, were
insecure and vulnerable to government surveillance.

At Amnesty, we are keenly aware of these risks
and of our responsibility to protect our communications and those of the people
we work with. We have substantially strengthened our security systems and are
investing in training our staff, but we also want to contribute to greater
digital security for the wider human rights movement. Here are two examples of
how we’re doing this:

The Secure Communications Framework

Everyone knows that some governments can undertake
electronic surveillance, monitoring mobile calls and internet traffic. What is
not always known is that any government can do
this. The technology is cheap, readily available and easy to use.

The question that HRDs and organisations need
to ask themselves is not whether their government has surveillance
capabilities, but whether it is likely to use those capabilities against them.
Next, if they believe that they are at risk of surveillance, they need to
decide how they will protect themselves.

The Secure Communications Framework is a
practical tool for HRDs and organisations to guide them through the process of
assessing the risk of surveillance and the methods and tools they should use,
based on that risk. It was built to be simple and approachable for non-security
experts.
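The idea of tiering tools to assessed risk can be sketched in code. The questions, tiers and tool lists below are purely illustrative assumptions, not the framework's actual content:

```python
# Hypothetical sketch of a risk-tiered approach to choosing communication
# tools. The questions, tiers and recommendations are illustrative examples
# only; they do not reproduce the Secure Communications Framework itself.
GUIDANCE = {
    "low": ["standard email", "regular phone calls"],
    "medium": ["services using TLS", "encrypted messaging apps"],
    "high": ["end-to-end encrypted messaging", "PGP-encrypted email"],
}

def assess_tier(government_uses_surveillance: bool, work_is_sensitive: bool) -> str:
    """Map two simple yes/no risk questions to a risk tier."""
    if government_uses_surveillance and work_is_sensitive:
        return "high"
    if government_uses_surveillance or work_is_sensitive:
        return "medium"
    return "low"

tier = assess_tier(government_uses_surveillance=True, work_is_sensitive=True)
print(tier, GUIDANCE[tier])  # prints the tier and the matching tool list
```

The point of such a structure is that a non-expert answers a few concrete questions and receives guidance matched to their situation, rather than having to weigh every tool against every threat themselves.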

The framework was developed within Amnesty,
based on our own understanding of day-to-day human rights work and on our
experience of working with HRDs and partner organisations in many countries. It
was developed thanks to the Ford-Mozilla
Open Web Fellows Program; it is open source, so anybody
can adapt it for their own purposes. We are currently working on documentation
to accompany the framework and will release it as a package in the coming
months. You can find an overview here.

Campaigning for strong encryption

Encryption is an essential
means of protecting our personal information: it aims to make information
accessible only to its owner or its intended recipient. Applied to email, for
example, encryption ensures that only the sender and the recipient can read a
message; anyone intercepting your internet connection sees only scrambled
information.
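A minimal sketch can show what "scrambled" means in practice. The example below uses a one-time pad (XOR with a random key), which is far simpler than the public-key schemes real email encryption relies on, but illustrates the core idea: without the key, an eavesdropper sees only noise.

```python
import secrets

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # XOR each byte of the data with the corresponding key byte.
    # Applying the same key twice recovers the original data.
    return bytes(d ^ k for d, k in zip(data, key))

message = b"Meet at the courthouse at noon"
key = secrets.token_bytes(len(message))  # one random key byte per message byte

ciphertext = xor_cipher(message, key)  # what an eavesdropper would intercept
recovered = xor_cipher(ciphertext, key)  # only the key holder can do this

assert ciphertext != message  # intercepted traffic is scrambled
assert recovered == message   # the intended recipient reads it normally
```

In real systems the hard problem is getting the key to the recipient securely, which is why practical tools use public-key cryptography rather than shared pads; the protection against interception, however, works on this same principle.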

Encryption
helps protect us from cybercriminals and government surveillance. It is not a
panacea for digital security, but it is essential to it. For HRDs, it means
their communications are protected against all but the most invasive kinds of
surveillance. Perhaps unsurprisingly, many governments don’t like encryption.
Countries such as Pakistan, India and Cuba have bans or restrictions on encryption in place.

Earlier
this year, the FBI tried to force Apple to create a version of its iPhone
software that would have allowed the phone’s encryption to be bypassed.
Such ‘backdoors’ to the
encryption deployed in devices or services could make the data of all
users of that device or service vulnerable to theft or unauthorised access.

Amnesty
is campaigning for stronger encryption
in products and services. We want governments to stop trying to ban, restrict,
or weaken it, and we want companies to strengthen encryption in their products
and to communicate clearly to their users who can access their data.

As a
human rights movement, we have to understand the risks that come with digital
technologies. They are great tools for communication, outreach and
mobilisation, but they can very easily be turned against us. We have a
responsibility towards the victims we support, our sources and our partners: whether
we campaign on human rights issues, provide legal advice or training, or fund
human rights work, we all have a responsibility to protect our data and our
communications with the people we work with. Many of us have started taking
concrete steps, but much more needs to be done.

What the future holds

The
amount of our personal and behavioural data that lives in the cloud will
increase exponentially as everyday objects become connected to the internet.
With the Internet of Things (IoT), household goods are becoming ‘smart’:
thermostats that monitor heating patterns and activate autonomously, home
personal assistants with ‘always on’ microphones, and cars continuously connected
to the internet are just some of the examples that already exist. Eventually,
almost every device we use, and even things like clothing, will be connected. This
means that the potential for surveillance will be much greater – unless systems
are built to protect privacy.

Earlier
this year, James Clapper, the US director of national intelligence, said: “In the future, intelligence services might use the [internet
of things] for identification, surveillance, monitoring, location tracking, and
targeting for recruitment, or to gain access to networks or user credentials.”

But
increased connectivity won’t be the only issue. Computers can process data far
more efficiently than humans and are becoming more capable every day. They
are already being used for predictive policing –
estimating the likelihood of crime with complex algorithms. There are many
risks in this approach. One of them is algorithmic bias: the programming or the
training data can include biases, intentional or not, that result in
discriminatory treatment of certain groups, for example religious or
ethnic minorities. The use of artificial intelligence in policing and crime
prevention is likely to spread significantly over the coming years.
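A toy example can make algorithmic bias concrete. The figures below are invented: if one neighbourhood has historically been patrolled more heavily, more of its offences get recorded, and a naive model trained on those records inherits the skew.

```python
# Toy illustration of algorithmic bias with hypothetical data: neighbourhood
# A was patrolled four times as heavily as B, so more of its offences were
# recorded, even though behaviour per patrol is identical in both.
historical_record = {
    # neighbourhood: (recorded arrests, patrols per week)
    "A": (120, 20),  # heavily patrolled: more crime gets *recorded*
    "B": (30, 5),    # lightly patrolled: same behaviour, fewer records
}

def predicted_risk(neighbourhood: str) -> float:
    # A naive model: risk score proportional to raw recorded arrests.
    arrests, _patrols = historical_record[neighbourhood]
    return arrests / 100

def arrests_per_patrol(neighbourhood: str) -> float:
    # Normalising by patrol intensity reveals the distortion in the record.
    arrests, patrols = historical_record[neighbourhood]
    return arrests / patrols

print(predicted_risk("A"), predicted_risk("B"))          # 1.2 vs 0.3
print(arrests_per_patrol("A"), arrests_per_patrol("B"))  # 6.0 vs 6.0
```

Arrests per patrol are identical in both neighbourhoods, yet the naive model scores A four times "riskier" than B – and if police then patrol A even more, the skewed record grows, creating a feedback loop. The bias lives in the data, not in any explicit rule.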

These
emerging risks need urgent attention from the human rights movement. We need to
demand more transparency from companies developing these products and from
state agencies using them, and we need to campaign for strong human rights
protections, both in the technology and in law. If we don’t act swiftly,
today’s mass surveillance will seem like child’s play.
