It's fascinating to me that all the information required for the analysis is publicly available. This seems like a big shift from an era when only an intelligence agency would have been able to perform this sort of analysis.
The "open source intelligence" report on the MH17 incident -- where people found social media videos of the Buk missile proceeding through the countryside before being used to down the passenger jet -- was super interesting in a similar vein: https://www.bellingcat.com/news/uk-and-europe/2015/10/08/mh1...
Perhaps these short-enough-for-Twitter statements aren't ideal, because they can easily be construed as demanding retribution without due process. But the righteous indignation doesn't seem out of place. Not only are the killings themselves shocking, but the BBC alleges that the Cameroonian government was itself complicit in the cover-up by calling the viral video "fake news", and putting the burden of investigating its own military onto outsiders.
The BBC could have worded its closing tweet more clearly, but I don't think it's wrong for them to reiterate how rich it is for the Cameroonian gov't to make promises of fair trials when it seems to have done its damnedest to avoid bringing justice at all.
Seems fine, unless you read it as them arguing that they should not be given a fair trial; I read it the opposite way, as them arguing that the women and children should have been.
Media have a responsibility to present all facts impartially. But that does not mean that news media don't also have a responsibility to put those facts into context, especially when it concerns the weak versus the strong (civilian victims versus a violent government). That is what they are doing here. There is no encouragement to mete out revenge. It is juxtaposing the due process the alleged perpetrators will face with the lack of due process their victims suffered.
By the same token, if soldiers accused of a war crime were found to have suffered a miscarriage of justice, I'd expect the BBC to cover that side too, and I expect they would.
Those final statements on the Twitter thread stuck out for me too - they seemed a little incongruous. But they're taken (as is the entire thread) from the BBC Africa video, where they actually make perfect sense as the closing remarks.
It's hard to watch this sort of footage for me now. Once upon a time when I was younger I could handle it, but with young children the same sort of age it really tears me apart. Though, I think people should be forced to confront it, because it's the only way it's ever going to improve.
This is one example of routine acts committed by Cameroon's military, especially in the English-speaking regions. There has been no reaction from the international community. Instead, the West is asking Cameroon to investigate. They're basically asking the Cameroonian government to cover up its atrocities.
The video clearly shows the Cameroonian military shooting a mother with children. Same as hundreds of other incidents. What's there to investigate?
Condemn the actions and let the govt face consequences.
I can't imagine this kind of analysis carrying much weight in 3-5 years as advances in machine learning and image manipulation turn the notion of "seeing is believing" on its head.
edit: Imagine running this process in reverse. Pick the culprit, time, and location. Then arrange the scene to exactly match the false accusation.
Probably didn't happen here, but in a few years it won't be possible to know the difference.
I disagree that "deepfakes" will be unverifiable. There is a very robust forensic framework for analyzing videos for authenticity, and it often relies not just on the visual component but on all the other vectors that open up as a result: electromagnetic signals, audio analysis, etc. A deepfake that passed this kind of analysis would imply that digital forensics will see no advancements from AI, which doesn't make sense, because there is no reason not to update forensic processes to keep up with changing technology.
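One concrete example of a non-visual vector is electrical network frequency (ENF) analysis: mains hum (50 Hz or 60 Hz, depending on the grid) leaks into recordings made near powered equipment, and at minimum it can tie a clip to a 50 Hz versus 60 Hz region. Here's a minimal sketch of the detection step on a synthetic signal, using the Goertzel algorithm to measure the power at each candidate hum frequency (the signal and all parameters are invented for illustration):

```python
import math

def goertzel_power(samples, sample_rate, target_hz):
    """Power of a single frequency bin, computed via the Goertzel algorithm."""
    n = len(samples)
    k = round(n * target_hz / sample_rate)  # nearest DFT bin
    w = 2 * math.pi * k / n
    coeff = 2 * math.cos(w)
    s_prev, s_prev2 = 0.0, 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev2 ** 2 + s_prev ** 2 - coeff * s_prev * s_prev2

# Synthetic two-second "recording" containing only a faint 50 Hz mains hum.
rate = 1000  # samples per second
signal = [0.05 * math.sin(2 * math.pi * 50 * i / rate) for i in range(2 * rate)]

p50 = goertzel_power(signal, rate, 50)
p60 = goertzel_power(signal, rate, 60)
print(p50 > p60)  # True: the hum sits at 50 Hz, suggesting a 50 Hz grid
```

A real forensic workflow would track the hum frequency across the whole recording and correlate its drift against grid-frequency logs; this only shows how to ask which mains frequency is present.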
I keep meaning to write up a longer form version of this idea that I've been kicking around to solve this.
This issue is often talked about as an unsolvable problem, but I feel like we already have the technology to deal with it.
Lots of devices, such as the iPhone, already have a secure enclave that can be used for identity. Why not use this to sign videos and images for authentication purposes?
Then certain devices could produce verified, undoctored images and videos. Or even allow some limited tweaking and editing while attesting to the original.
This could be embedded into the file produced, like EXIF data, and read by services like Facebook and Twitter to badge the image/video as verified and undoctored. This could improve the amateur case significantly.
Professional photography and video equipment could have similar capabilities so originals could be produced and proven to be genuine.
I guess this idea still depends on the hardware devices remaining secure, which is far from trivial, but it does increase the cost of producing a fake. And to keep producing really good fakes, you have to have a zero-day to forge a genuine signature, or it becomes known that any media produced with that device version is suspect.
If such a system became widely adopted, at the very least savvy users and the media would be suspicious of anything not signed. And perhaps the general public would also learn to be skeptical, particularly if fakes became more widespread.
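A minimal sketch of that sign-and-verify flow (the device name, key, and field names are all hypothetical; a real design would keep an asymmetric key inside the enclave and publish only the public key, rather than the shared-secret HMAC used here as a stand-in):

```python
import hashlib
import hmac

# Stand-in for a per-device key that would live inside a secure enclave and
# never leave it. With an asymmetric scheme (e.g. Ed25519) the verifier would
# only need the device's public key, not this secret.
DEVICE_KEY = b"secret-key-burned-into-device-47"
DEVICE_ID = "camera-47"

def sign_capture(media_bytes):
    """Produce a signature record to embed alongside the file, EXIF-style."""
    digest = hashlib.sha256(media_bytes).hexdigest()
    tag = hmac.new(DEVICE_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return {"device_id": DEVICE_ID, "sha256": digest, "sig": tag}

def verify_capture(media_bytes, record):
    """A service like Facebook or Twitter could run this to badge the upload."""
    digest = hashlib.sha256(media_bytes).hexdigest()
    expected = hmac.new(DEVICE_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return digest == record["sha256"] and hmac.compare_digest(expected, record["sig"])

video = b"...raw video bytes..."
record = sign_capture(video)
print(verify_capture(video, record))            # True: untouched original
print(verify_capture(video + b"edit", record))  # False: modified after capture
```

Note the weakness of the HMAC stand-in: the verifier needs the device secret, which is exactly why a production design would use enclave-held asymmetric keys instead.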
> Pick the culprit, time, and location. Then arrange the scene to exactly match the false accusation.
Ah yes, but I logged on to Facebook at 8:12am and scrolled my news feed for a moment when I clicked a Chevrolet ad at exactly 8:15am (Facebook knows). I watched the video for exactly 12 seconds (Facebook still knows), then texted to my wife at 8:16am that we should think about getting a new car. I open up Reddit and log in at 8:17am and comment in /r/cars at 8:25 (Reddit knows). I started my Tesla at 8:32am and drove for 28 minutes (Tesla knows) to a Whole Foods, where I saved the location of my parked car (at almost exactly 9:00am) and entered the store. Grabbed a cold brew in a can and paid with my Visa card at 9:04 am (Visa knows), then walked to my office building and badged in through the door at 9:18 (my company knows), just in time to log on to the corporate network and send off a couple e-mails at 9:25, 9:26, and 9:29am. My whole day is tracked, timestamped and corroborated by third-parties already!
If we were able to easily collate the data that our workplaces, our GPS, our phones have on us it would be virtually impossible to place someone in a place they are not! And to go full dystopian, it's not too big a stretch to imagine the government would love to have exactly this.
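The collation step is simple once each service's log is normalized to (timestamp, event) pairs: building the alibi timeline is just a merge and sort. The service names and events below are the hypothetical ones from the story above:

```python
from datetime import datetime

# Hypothetical exports from a few of the services named above; in practice each
# would arrive in its own format and need normalising first.
facebook = [("2018-09-24 08:12", "logged in"), ("2018-09-24 08:15", "clicked ad")]
visa     = [("2018-09-24 09:04", "paid at Whole Foods")]
badge    = [("2018-09-24 09:18", "badged into office")]

def collate(*sources):
    """Merge per-service event logs into one chronological timeline."""
    events = [(datetime.strptime(ts, "%Y-%m-%d %H:%M"), what)
              for src in sources for ts, what in src]
    return sorted(events)

timeline = collate(facebook, visa, badge)
for ts, what in timeline:
    print(ts, what)
```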
It's not that it will be impossible to detect (they probably will be for quite some time) as it's about nature of dissemination - you first see fake, and only in its echoes you possibly see the truth.
Now, problem is, if fakes are easy to make, there'll be more of them, particulary if they generate impressions and spread faster than rebutals. And as space of fakes is infinite compared to space of truth, with advanced tools, it can be concocted in such a way that falsifying it might be lengthy process. And, in some cases there, it may be too late.
I wonder if smartphone makers could include some kind of hardware chip to verify the authenticity of videos. I don't think it's possible to build an unbreakable solution, but you could just make it very expensive.
The camera could use a chip with a built-in HSM and use steganography to create verifiable videos. Each private key could be tied to a unique device ID in the maker's database. This might even be a legitimate use for blockchain: the device maker could create Chainpoint proofs that include device IDs and public keys and embed them in Bitcoin transactions.
The HSM would be tamper-resistant, so it would be very expensive to extract the private key from the chip, or you'd have to collude with the device maker.
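As a sketch of the steganography step, here is naive least-significant-bit embedding of a device tag into raw pixel values. The tag format is made up, and a real scheme would need a watermark robust to video compression, which LSB embedding is not:

```python
def embed(pixels, payload):
    """Hide payload bytes in the least significant bit of each pixel value."""
    bits = [(byte >> i) & 1 for byte in payload for i in range(7, -1, -1)]
    if len(bits) > len(pixels):
        raise ValueError("payload too large for cover image")
    out = list(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit  # overwrite the lowest bit only
    return out

def extract(pixels, n_bytes):
    """Recover n_bytes hidden by embed(), reading bits MSB-first per byte."""
    out = []
    for b in range(n_bytes):
        byte = 0
        for i in range(8):
            byte = (byte << 1) | (pixels[b * 8 + i] & 1)
        out.append(byte)
    return bytes(out)

frame = list(range(200, 256)) * 4   # toy stand-in for one frame's pixel values
tag = b"dev:47;sig:ab12"            # hypothetical device ID + signature bytes
stego = embed(frame, tag)
print(extract(stego, len(tag)))     # b'dev:47;sig:ab12'
```

The changes are invisible to the eye (each pixel value shifts by at most 1), which is the appeal; the fragility under re-encoding is the catch.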
All of this is a moot point if you can just jailbreak the phone, fake the GPS and accelerometer data, put the camera in front of a screen, and play some audio into the microphones. I wonder if there are ways to detect video and audio that have been replayed through a projector/speaker instead of recorded in the real world. It's probably difficult to get it pixel perfect and replay perfect audio, and a neural net could probably be trained to spot any errors. Or maybe the projector/audio chip makers could include some steganography in their output. They might include an undetectable signal to indicate that this data is being replayed through a device, instead of recorded in the real world.
But then if someone wanted to commit a crime and get away with it, they could reverse engineer the signals from these projector/audio chips, and play those in the background. So any recorded videos could be discounted as a fake.
Also, I was thinking about how this could be used to prevent piracy when people record the screens in a movie theater. The studios could include a signal in the video, and the smartphone/video camera makers could stop recording when they detect that signal. But then a criminal could just hold up a screen with some copyrighted content, and everyone's phones and security cameras would stop recording.
If phones ever come with light-field cameras, it would also be a lot harder to fake. We'd have to invent realistic light-field projectors, but that would be pretty awesome. I've always wanted a holodeck.
This has been fun to think about. Would make some interesting Black Mirror episodes.
Even if they get the image manipulation perfect there will be lots of other ways to double check. Like if the featured video was fake there'd quite likely be evidence that some of the people were elsewhere at the time, or victims still alive and so forth.
It all comes back to the original tip from a "Cameroonian source." Without that, they would only have had a ridgeline somewhere in (probably) Cameroon.
That mountain-ridge trick: how reliable an identifier is it? I'm not really questioning the validity of its use in this situation, but I'd be curious to know more about the nature of the ridge-to-GPS-coordinates mapping. How many ridge outlines map to more than one location (albeit at potentially very distant places, one of which could be excluded if more information is available)? Or is the mapping completely injective? That would be really cool, a bit surprising, but not infeasible.
Sounds like a fun data science project. (Draw a ridge and get a google street view image whose horizon matches the ridge as closely as possible, maybe? Might be fun to play with!)
I think they mention they located the mountains not from the ridge outline, but from a tip-off they received about where it all happened, and then they corroborated that tip against the ridge outline, building placement, roads, etc.
> After a tip off from a Cameroonian source, we found an exact match for that ridge line on Google Earth
I imagine it depends on the resolution. Taking it to one extreme, you can imagine the horizon matched by two points: a straight-line slope. That may match many ridgelines. At the other extreme, you can imagine resolution down to the millimetre, catching every stone and tree. I can't imagine many duplicate ridgelines there.
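That intuition is easy to sketch: score candidate ridges against a traced profile by summed squared height differences, then downsample and watch the ambiguity appear. The elevation profiles below are invented for illustration:

```python
def match_score(traced, candidate):
    """Lower is better: summed squared height differences along the horizon."""
    return sum((a - b) ** 2 for a, b in zip(traced, candidate))

def downsample(profile, n):
    """Keep only n evenly spaced samples of a profile."""
    step = len(profile) // n
    return profile[::step][:n]

# Invented elevation profiles: heights along the skyline, left to right.
true_ridge  = [3, 5, 9, 14, 12, 8, 6, 7, 11, 4]
other_ridge = [3, 6, 9, 13, 12, 8, 6, 6, 11, 4]  # similar, but the wrong place
traced      = [3, 5, 9, 14, 12, 8, 6, 7, 11, 4]  # outline traced from the video

# At full resolution the true ridge wins clearly (0 vs 3)...
print(match_score(traced, true_ridge), match_score(traced, other_ridge))
# ...but sampled at just two points ("a straight line slope"), both score 0.
print(match_score(downsample(traced, 2), downsample(true_ridge, 2)),
      match_score(downsample(traced, 2), downsample(other_ridge, 2)))
```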
Just in case anyone else is as confused as I was: if you don't hit "play" on the picture of the ridgeline, you won't see the match. Or maybe I'm the only one who was wondering how that could be considered a match until I figured out I had to do that.
Could BBC please share the code for matching mountain ridges with locations?
They claim that the first mountain ridge was located by a tip from a Cameroonian source. Here they verify the ridge with the mountain ridges from a height map.
But later on, without mentioning any tip or source, they "found" a Channel 4 news report in the archives with, again, a mountain-ridge match. I see three possibilities:
1) They manually watched all the footage from the last few years in that area, freeze-framing whenever they saw a mountain ridge, then manually tracing it. I don't believe this is what happened; that's too many man-hours.
2) They knew through other means (perhaps camera GPS metadata, perhaps the report's description/summary) that the report video was taken near the massacre video. Because of this metadata/textual data, they realized they had nearby footage of an outpost by querying for the massacre location. But then they fail to mention this trivial step and make a bit of a show of highlighting some features and, again, a mountain ridge. In this case they have a database of archive footage and filter it by location/time.
3) They have automated software for isolating mountain-ridge profiles from footage (perhaps edge detection? remove all edges that don't appear to undergo rigid-body motion? but how do you filter out clouds, etc.?), and matching software for locating where on the heightmap the footage was shot. In this case it is explainable without ridiculous man-hours or ridiculous showmanship (instead of mentioning, say, the GPS coordinates in the journalists' camera metadata). But then perhaps the BBC could share this code with the public?
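For what it's worth, the profile-isolation step in 3) is simple once you have a sky segmentation, which is the genuinely hard part (clouds included). Here's a toy sketch of that step, not a claim about how the BBC actually did it:

```python
def skyline(mask):
    """For each column of a binary mask (1 = sky, 0 = terrain), return the row
    index where terrain starts -- the ridge profile, measured from the top."""
    rows, cols = len(mask), len(mask[0])
    profile = []
    for c in range(cols):
        r = 0
        while r < rows and mask[r][c] == 1:
            r += 1
        profile.append(r)
    return profile

# Toy 5x6 frame after sky segmentation (clouds already classified as sky).
mask = [
    [1, 1, 1, 1, 1, 1],
    [1, 1, 0, 1, 1, 1],
    [1, 0, 0, 0, 1, 1],
    [0, 0, 0, 0, 0, 1],
    [0, 0, 0, 0, 0, 0],
]
print(skyline(mask))  # [3, 2, 1, 2, 3, 4]
```

The resulting integer profile is exactly the kind of curve you could slide over synthetic horizons rendered from a heightmap to find candidate camera positions.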
Below the BBC's thread, an insightful comment by John Odande:
> As Africans, we like saying "African solutions to African problems" yet it is funny that we continue relying on international media to relay accurate information and challenge governments' positions all over the continent. Great job BBC Africa!
"Bemoan" is a strong word. The government whose soldiers murdered these women and children called it "fake news", declaring their innocence before even investigating. The BBC is criticising that disgusting behaviour by contrasting the emphasis on the soldiers' legal innocence until proven guilty with the neglect shown to the people who were executed.
Heartbreaking. Congratulations to the BBC for this amazing analysis. But real justice would require telling the story of how these women came to be murdered. This region of Cameroon is a flashpoint in a regional fight against a Muslim fundamentalist/warlord group called Boko Haram (translates as "Western education is a sin"). How and why did these four civilians come to be murdered as part of this tragic conflict in one of the poorest parts of the world?
I'm shocked how well Twitter worked here, normally I dread reading long tweet threads but this worked really well. Short videos and a good mix of media helps a ton.
In France, the far-right leader is being prosecuted for "promoting terrorism" for having shown an ISIS killing video in order to condemn it. I can see why people would be shy about showing the actual violence.
https://www.bellingcat.com
On the other hand, this seems to be...below...the BBC:
"The government statement makes clear that all these men enjoy the presumption of innocence, and that they will be given a fair trial."
"The two women killed outside Zelevet received no trial at all.
"No presumption of innocence was extended to the children who died with them."
https://www.youtube.com/watch?v=XbnLkc6r3yc
I'm not sure whether it's the medium (Twitter) or the lack of editing that makes them jar in the thread.
To kill unarmed women and children? Sheesh, what would you have written in place of those?
Some other interesting investigative materials from the same podcast (Warning: may contain violence/nsfw etc):
https://www.nytimes.com/interactive/2018/04/04/world/middlee...
https://github.com/pjlsergeant/multimedia-trust-and-certific...
to YC this year. Definitely something that needs thinking about.