Harlo Holmes currently works as a digital security trainer for Freedom of the Press Foundation, a non-profit organisation that supports and defends public-interest journalism. Prior to this, Harlo was the technical lead for CameraV, an image and video verification app from The Guardian Project. During this interview we talk about CameraV, digital evidence used in courts, her experience being part of the cyberfeminist group Deep Lab and why she became interested in metadata.
I don't really know what started it, I've just always been really, really curious about it. I'm always interested in finding ways of triangulating information; it's just very interesting to see how different data points can either support or refute a story. The invisible, as you call it, is such a great way of teasing out those relations. The reason I've been so interested in the visual, in digital image and video, is to do with my performance art practice. Those artifacts have always enthralled me.
Metadata is data about data. Primarily it's the site that portrays what one would call the "Ur event", i.e. how a particular digital property came into existence. So, going back to your previous question of why I find it so interesting: it all goes back to Walter Benjamin's idea of the replicability of art, how art in the modern age loses some of its lustre because it is so immediately accessible and infinitely replicable. But metadata, I find, is a site that actually preserves that uniqueness. Even though things might, on their surface, be copied and filmed all over the place continuously, metadata is that site of originality.
CameraV began its life as a mobile app named InformaCam, created by The Guardian Project and WITNESS. It's a way of adding a whole lot of extra metadata to a photograph or video in order to verify its authenticity. It's a piece of software that does two things: firstly, it describes the who, what, when, where, why and how of images and video; secondly, it establishes a chain of custody that can be pointed to in a court of law. The app captures a lot of metadata at the time the image is shot, including not only geo-location information (which has always been standard) but corroborating data such as visible wifi networks, cell tower IDs and bluetooth signals from others in the area. It also records additional information, such as light meter values, that can go towards corroborating a story where you might want to tell what time of day it was.
All of that data is then cryptographically signed by a key that only your device is capable of generating, encrypted to a trusted destination of your choice and sent off over proxy to a secure repository hosted by a number of places such as Global Leaks, or even Google Drive. Once received, the data contained within the image can be verified via a number of fingerprinting techniques, so the submitter, while maintaining their anonymity if they want to, is still uniquely identifiable to the receiver. Once ingested by a receiver, all this information can then be indexed and searched.
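The sign-and-verify idea described here can be sketched in a few lines: canonically serialise the metadata bundle, fingerprint it, and sign the fingerprint with a per-device secret, so any tampering fails verification. This is only an illustration, not CameraV's actual implementation; CameraV uses PGP, whereas here an HMAC stands in for the per-device signing key, and all field names are made up.

```python
import hashlib
import hmac
import json

def bundle_fingerprint(metadata: dict) -> str:
    """Hash a canonical JSON serialisation of the metadata bundle."""
    canonical = json.dumps(metadata, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()

def sign_bundle(metadata: dict, device_key: bytes) -> str:
    """Sign the fingerprint; the HMAC key stands in for a device's PGP key."""
    return hmac.new(device_key, bundle_fingerprint(metadata).encode(),
                    hashlib.sha256).hexdigest()

def verify_bundle(metadata: dict, device_key: bytes, signature: str) -> bool:
    """A bundle verifies only if neither the data nor the key has changed."""
    return hmac.compare_digest(sign_bundle(metadata, device_key), signature)
```

Changing even a single data point in the bundle changes the fingerprint, so the signature no longer verifies — which is why manually forging a bundle is so difficult.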
Image taken from The Guardian Project's guide on the CameraV app
Not that I know of. I do realise that in the United States the push towards outfitting police officers with body cameras is a really, really huge thing right now. It falls in line with the current civil rights movement, and so there are a few companies, like Vievu, BodyWorn, and Taser, who definitely have similar ideas. They all have proprietary apps for evidence gathering, and I would be very curious to see how they are treating metadata and whether it has any similarities to the way that we are treating it.
The release we currently have is our beta. We started our research and development in 2013 with our alpha release, and that afforded us a lot of time to shop various principles around to various organisations to see what would work and which features were important. Since we have gotten a good handle on that and have solidified exactly what we want to verify and how, we're launching a beta in order to put it into people's hands to test. Once we start receiving more data from the beta test phase, we will be able to gauge what worked and what did not, and move on from there.
CameraV is not designed to send data to me, to WITNESS, or to The Guardian Project so I don't know. Ultimately it's designed to leave it up to the user how they want to share the data and with whom. This is why the beta test is going to be so important because we're going to be establishing ourselves within organisations as the receiver of media so we get to run these tests.
Technically speaking, it's very difficult for those things to be manually forged. If someone took the metadata bundle and changed a couple of parameters or data points, what they ultimately send to us in order to trick us would not verify with PGP, and each instance of the app has its own signing key. That said, I do realise that devices need to be trustworthy. This is an issue beyond CameraV: any app that gathers digital metadata and embeds it into a photograph or video is going to have to run on a trustworthy device.
That's a good question, and definitely one that touches on the reality of developing in the open. Anyone could fork the code, modify it to do whatever they wanted, and release it into the wild. This is why the app exists on Google Play and F-Droid, though: the app a user has downloaded can be scrutinised against the published source.
In an ideal use case, verification in CameraV works the same way as with PGP. Key parties exist because human trust is important. CameraV easily allows you to export your public key from the app. If you give this key to someone when they're in the room with you, and compare fingerprints, then you trust that person's data more than if a random person just emailed you their public key unsolicited. If organisations want to earnestly and effectively use the app in a data-gathering campaign, some sort of human-based onboarding is necessary.
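The in-person fingerprint comparison described above works because a fingerprint is just a short, human-comparable digest of the public key material. The sketch below uses SHA-256 grouped into four-character blocks for reading aloud; real PGP fingerprints are computed differently (over the key packet), so treat this purely as an analogy.

```python
import hashlib

def key_fingerprint(public_key_bytes: bytes) -> str:
    """Digest of key material, grouped into blocks for easy verbal comparison."""
    digest = hashlib.sha256(public_key_bytes).hexdigest().upper()
    return " ".join(digest[i:i + 4] for i in range(0, len(digest), 4))
```

Two people in the same room can read these blocks aloud to each other; if every block matches, they are holding the same key.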
In terms of device integrity, Google has an API called "SafetyNet" which basically runs checks on the device to determine its trustworthiness based on a variety of factors. This does require a live connection to the internet, but that's to be expected. (We started working on a similar solution just for CameraV; it wasn't complete by the time we launched beta, and at this point it makes more sense to use Google's solution instead.) There are definitely some cases where a live connection to a network isn't feasible, though; that will just have to be noted in the resulting file's chain of custody. I think it's healthy to remember that CameraV is one solution in a suite of solutions people can use depending on how critical a piece of media is, and anyone auditing the media will have to keep that in mind.
It definitely depends on the case, and it depends on the judge and the legal team advocating for certain cases. If a case hinges on digital evidence, a certain number of forensic experts are going to have to be involved. If someone's being accused of a war crime and the evidence is based on video, there have been cases where the accused will bring forth their own group of forensic analysts, totally shady and dodgy, who will try to sway the court with trumped-up or falsified forensic evidence in order to undermine the original video.
So what CameraV tries to do is add a set of irrefutable data points that are easier for a judge to parse. Ultimately, when you're talking to a group of forensic experts, there is a specific set of domain knowledge that will go over a judge's head, so having the most irrefutable set of data points for a judge to look at is the persuading factor; that is what we're trying to promote. The International Criminal Court (ICC) has been at the forefront of leading this type of research, and I'm really impressed with the standards they are trying to establish around digital evidence. To a certain extent, we are following their lead, because they are the ones who decide these cases, who decide exactly what's relevant and what's not, and who establish the protocols needed for this type of evidence to be acceptable.
One thing that impresses me is the way they archive a website. They actually have really awesome protocols for that, which I think people should follow across the board. When you access a webpage there is a bunch of metadata that you don't necessarily see, and if you access a website today versus a week from now, there is a potential that that stuff might change. So preservation of when and, more importantly, how a site was accessed is metadata that matters, equally as important as the content of the page. That is just one example of how forward-thinking they are.
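The archival idea described here, preserving when and how a page was fetched alongside its content, can be sketched as a small record builder. The record shape and field names below are my own illustration, not the ICC's actual protocol.

```python
import datetime
import hashlib

def archive_record(url: str, status: int, response_headers: dict,
                   body: bytes) -> dict:
    """Capture the circumstances of a fetch alongside a hash of its content."""
    return {
        "url": url,
        # When the page was accessed, in UTC
        "fetched_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        # How it was accessed: status and the headers the server returned
        "status": status,
        "headers": dict(response_headers),
        # Content hash, so later tampering with the archived body is detectable
        "body_sha256": hashlib.sha256(body).hexdigest(),
    }
```

Archiving this record next to the page body means a later reader can see not just what the page said, but the exact conditions under which it said it.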
Deep Lab is really great as a community; we all work on different things and all have different connections to technology. Some are more fine-arts focused, and others are more interested in hacking and don't really think about how that might be art as well. It's a really great community to be involved with. Actually, the thing that I love the most, because it's exactly the opposite of what I do, is the work Lindsay Howard does. She's a curator, pretty much the first digital fine arts curator, and she's the one who put on the first digital arts auction at Phillips gallery. I love that idea. I'm very excited for her work.
It works the same way as a regular auction does, with paddles, but people were bidding on digital artworks. She, and the group of artists she represents, have very interesting models (and this goes back to metadata again) for preserving the provenance and individuality of the artworks. This introduces a lot of tension, because everything is digital, and everything digital can be cloned, copied and disseminated at lightning speed. Yet she and the artists she works with have figured out compelling ways of preserving the uniqueness of art that you need in order to make money in the fine arts industry, which I think is really great.
Screenshot from Phillips website of some of the art sold during the Paddles ON! Auction (taken on 6 October 2015)
Our work is mostly United States focused, but every once in a while we train in other countries. I just came back from Argentina, where they don't necessarily have a restrictive press, but they do have a governmental and legal environment that can be very hostile to people who want to tell the truth. Of course, you hear about other places in Latin America where journalists are murdered, so it's really important to maintain an international focus. In the United States, though, there is a little bit of mystery surrounding basic protection of private communication that doesn't always get addressed. In terms of security, newsrooms are generally preoccupied with their network security, or whether a phishing attack is going to infiltrate their network; but, other than super-star journalists who take it into their own hands, there's very little knowledge of how an individual journalist can better protect their sources. So that's what we're focusing on: the communication flows between journalists and their sources.
Not really, I think of it as applying patches here and there. The only spot of tension that I find sometimes is in cases where journalists are not able to install their own software on their own machines, but that barrier is lifting more and more. When you're introducing PGP, they are like, "I get it, and this is great, but no one is going to let me install it". In that case you are going to have to have an intervention with the IT department and tell them, "this is an important bit of software, so please allow this person to install it on their machine". But I find that IT departments are very willing to have this conversation, because they do realise the importance of it nowadays, especially after the Snowden revelations.
Officially I was working on batch processing of documents, the dumps you would get from someone who drops an entire trove of emails from the Christie administration or whatever, and finding optimal ways to pick those apart that make sense at network scale. It also afforded me the opportunity to jump around to different departments there and find out what they were doing and what their concerns were. But because I really love operational security, I made it my business to make friends with their security department, to do trainings for journalists there on how to use these tools, and to field questions from individuals about how to strengthen their skills.
We have a software package I built called Unveillance, which does the batch processing on documents and is actually designed to be plug-in based. So if you wanted it to handle PDFs, you could write scripts specific to how you wanted PDFs to be processed; likewise, if you wanted images or videos, you would write scripts particular to those. It's really just based on mime type.
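A mime-type-based plugin system like the one described might look roughly like this. The registry, decorator, and processor functions below are hypothetical, not Unveillance's actual code; they just show documents being routed to handlers by mime type.

```python
import mimetypes

# Hypothetical plugin registry: mime type (or top-level family) -> handler
PLUGINS = {}

def plugin_for(mime_prefix):
    """Register a processing function for a mime type or family."""
    def register(fn):
        PLUGINS[mime_prefix] = fn
        return fn
    return register

@plugin_for("application/pdf")
def process_pdf(path):
    # A real plugin would extract text, metadata, attachments, etc.
    return {"type": "pdf", "path": path}

@plugin_for("image")
def process_image(path):
    # A real plugin might run EXIF extraction or reverse image search
    return {"type": "image", "path": path}

def dispatch(path):
    """Route a file to a plugin based on its guessed mime type."""
    mime, _ = mimetypes.guess_type(path)
    if mime is None:
        return None
    for prefix, fn in PLUGINS.items():
        if mime == prefix or mime.split("/")[0] == prefix:
            return fn(path)
    return None
```

New formats can be supported just by registering another handler; the dispatcher itself never changes.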
I'm a little shy about sharing so far, but it's in active development to improve certain key aspects. Actually, we will be using it for batch processing submissions from CameraV pretty soon. I'm not a front-end developer at all, so I've been researching other groups that have really strong front ends for visualising that data in compelling ways. There's a project called Verified Pixel, headed by Sam Stewart (a Knight Foundation journalism fellow out of Stanford). It runs images against a number of APIs, triangulating data to ultimately vouch for an image. For instance, one thing it does is check an image against reverse image search engines, so you can say, "oh, someone sent in this picture of a war crime, but actually this picture has been floating around the web for the last 12 months". So I'm planning on hooking up the CameraV API to it as a plugin in the near future.
Once people started to realise the security implications of geo-tagging photos, device and software manufacturers like Google, Samsung and Apple either began turning those things off by default in the stock camera app, or made it explicit how you can. In apps like Instagram, things aren't automatically added to your photo map unless you actually tag them. Having to deliberately enable geo-tagging is definitely good for all of our privacy. Another factor with third-party services like Twitter is that they are really concerned about conserving bytes. So when you take a photo with your camera and submit it to their services, they are going to strip out a lot of metadata anyway, because they don't want to host all that data. Not for privacy reasons (privacy is often a secondary concern), but because of how much extra storage every byte adds.
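The stripping described here can be sketched as a simple filter over parsed metadata tags. The tag names below follow common EXIF GPS conventions, but the flat-dictionary representation is assumed for illustration; real services operate on the binary EXIF segment itself.

```python
def strip_location_tags(exif: dict) -> dict:
    """Drop geo-tagging fields from a parsed EXIF tag dictionary."""
    # Common EXIF GPS tag names (illustrative, not exhaustive)
    SENSITIVE = {"GPSInfo", "GPSLatitude", "GPSLongitude",
                 "GPSAltitude", "GPSTimeStamp"}
    return {k: v for k, v in exif.items() if k not in SENSITIVE}
```

Non-sensitive tags, such as the camera make or exposure settings, pass through untouched; only the location fields are removed.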
Screenshot taking from Harlo's website on 6 October 2015