2017年07月17日

Fake news: you ain't seen nothing yet. Generating convincing audio and video of fake events. (Part 2)

Mr Goodfellow turned to a familiar concept: competition. Instead of asking the software to generate something useful in a vacuum, he gave it another piece of software—an adversary—to push against. The adversary would look at the generated images and judge whether they were “real”, meaning similar to those that already existed in the generative software’s training database. By trying to fool the adversary, the generative software would learn to create images that look real, but are not. The adversarial software, knowing what the real world looked like, provides meaning and boundaries for its generative kin. 
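The adversarial game described above can be sketched in a few dozen lines. The toy below is an illustrative assumption, not Mr Goodfellow's actual code: a "generator" that maps noise to a single number tries to fool a logistic "discriminator" into believing its outputs come from the real data distribution (here, samples around 4). All hyperparameters and names are made up for the sketch.

```python
import numpy as np

# Toy 1-D GAN: generator g(z) = a*z + b, discriminator d(x) = sigmoid(w*x + c).
# "Real" data is drawn from N(4, 1); the generator starts near N(0, 1).
rng = np.random.default_rng(0)
a, b = 1.0, 0.0   # generator parameters
w, c = 0.1, 0.0   # discriminator parameters
lr = 0.02

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

for step in range(2000):
    z = rng.standard_normal(64)
    real = 4.0 + rng.standard_normal(64)
    fake = a * z + b

    # Discriminator step: push d(real) toward 1 and d(fake) toward 0.
    dr, df = sigmoid(w * real + c), sigmoid(w * fake + c)
    w -= lr * (np.mean((dr - 1.0) * real) + np.mean(df * fake))
    c -= lr * (np.mean(dr - 1.0) + np.mean(df))

    # Generator step: push d(fake) toward 1, i.e. try to fool the adversary.
    df = sigmoid(w * (a * z + b) + c)
    a -= lr * np.mean((df - 1.0) * w * z)
    b -= lr * np.mean((df - 1.0) * w)

# After training, the generator's offset b has drifted toward the real mean.
print(b)
```

The point of the sketch is the feedback loop: the discriminator, which knows what real data looks like, supplies the gradient that drags the generator's output distribution toward reality.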

fool: to deceive
meaning: what is intended
boundaries: limits
kin: something of the same kind

Today, GANs can produce small, postage-stamp-sized images of birds from a sentence of instruction. Tell the GAN that “this bird is white with some black on its head and wings, and has a long orange beak”, and it will draw that for you. It is not perfect, but at a glance the machine’s imaginings pass as real. 

beak: a bird's bill

Although images of birds the size of postage stamps are not going to rattle society, things are moving fast. In the past five years, software powered by similar algorithms has reduced error rates in classifying photos from 25% to just a few percent. Image generation is expected to make similar progress. Mike Tyka, a machine-learning artist at Google, has already generated images of imagined faces with a resolution of 768 pixels a side, more than twice as big as anything previously achieved. 

rattle: to unsettle, shake up (here, not "to make a rattling noise")

Mr Goodfellow now works for Google Brain, the search giant’s in-house AI research division (he spoke to The Economist while at OpenAI, a non-profit research organisation). When pressed for an estimate, he suggests that the generation of YouTube fakes that are very plausible may be possible within three years. Others think it might take longer. But all agree that it is a question of when, not if. “We think that AI is going to change the kinds of evidence that we can trust,” says Mr Goodfellow.

Yet even as technology drives new forms of artifice, it also offers new ways to combat it. One form of verification is to demand that recordings come with their metadata, which show when, where and how they were captured. Knowing such things makes it possible to eliminate a photograph as a fake on the basis, for example, of a mismatch with known local conditions at the time. A rather recherche example comes from work done in 2014 by NVIDIA, a chip-making company whose devices power a lot of AI. It used its chips to analyse photos from the Apollo 11 Moon landing. By simulating the way light rays bounce around, NVIDIA showed that the odd-looking lighting of Buzz Aldrin’s space suit—taken by some nitwits as evidence of fakery—really is reflected lunar sunlight and not the lights of a Hollywood film rig.
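The metadata checks described above amount to cross-referencing a recording's claimed circumstances against independent records. A minimal sketch of that idea, assuming the EXIF fields have already been extracted into a dict by some EXIF library; the field names, camera dates and weather records below are invented for illustration.

```python
from datetime import datetime

# Sketch of metadata-based verification: compare what a photo claims about
# itself against independently known facts. All example data is made up.

def check_consistency(exif, reference):
    """Return a list of mismatches between claimed metadata and known conditions."""
    problems = []
    claimed = datetime.fromisoformat(exif["capture_time"])
    # 1. A photo cannot predate the release of the camera model that took it.
    released = datetime.fromisoformat(reference["camera_released"])
    if claimed < released:
        problems.append("capture predates camera model release")
    # 2. The claimed weather must match the historical record for that time and place.
    if exif["scene_weather"] != reference["recorded_weather"]:
        problems.append("weather does not match historical record")
    return problems

exif = {
    "capture_time": "2014-06-01T14:30:00",
    "scene_weather": "sunny",
}
reference = {
    "camera_released": "2015-01-01T00:00:00",  # this camera did not exist yet
    "recorded_weather": "heavy rain",
}
print(check_consistency(exif, reference))
```

A real pipeline would of course pull the reference records from archives rather than hard-code them, but the logic is the same: any single mismatch is enough to eliminate a photograph as genuine.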

artifice: trickery, clever deception
verification: confirmation that something is true
recherché: obscure, exotic, elaborately refined
bounce: to reflect
nitwits: fools
fakery: sham, counterfeiting
rig: a set-up of equipment (here, studio lighting; not "disguise")

Amnesty International is already grappling with some of these issues. Its Citizen Evidence Lab verifies videos and images of alleged human-rights abuses. It uses Google Earth to examine background landscapes and to test whether a video or image was captured when and where it claims. It uses Wolfram Alpha, a search engine, to cross-reference historical weather conditions against those claimed in the video. Amnesty’s work mostly catches old videos that are being labelled as a new atrocity, but it will have to watch out for generated video, too. Cryptography could also help to verify that content has come from a trusted organisation. Media could be signed with a unique key that only the signing organisation—or the originating device—possesses. 

grappling: wrestling (with a problem)
atrocity: an act of extreme cruelty
cryptography: the science of encoding information (not "code-breaking", which is cryptanalysis)

Some have always understood the fragility of recorded media as evidence. “Despite the presumption of veracity that gives all photographs authority, interest, seductiveness, the work that photographers do is no generic exception to the usually shady commerce between art and truth,” Susan Sontag wrote in “On Photography”. Generated media go much further, however. They bypass the tedious business of pointing cameras and microphones at the real world altogether. 

presumption: assumption
veracity: truthfulness
seductiveness: allure
shady: dubious, disreputable
generic exception: an exemption granted to a whole class (here negated: photography is not exempt as a class)
tedious: boring, tiresome

Technology keeps advancing, and software for judging whether something is fake is advancing with it. Meanwhile, media organisations are marking original images so that their authenticity can be verified. Still, as these technologies continue to improve, the day when something entirely fictitious can be produced as though it were real may not be far off.

Tuesday. That's all for today. See you again tomorrow.

swingby_blog at 21:17

Profile

海野 恵一 (Clyde Unno)
Born 14 January 1948

Education: graduated from the Faculty of Economics, University of Tokyo

President and CEO, Swingby Inc.

Representative Director, Accenture Japan (2001-2002)