Friday, March 17, 2017

Artificial intelligence chatbots will overwhelm human speech online; the rise of MADCOMs


Matt Chessen, medium.com


[Image from article, with caption: “The machines are here and they want to have a word with you.”]


Five years from now you won’t have any idea whether you’re interacting with a human online or not. In the future, most online speech, digital engagement, and content will be machines talking to machines.

This is called the MADCOM world: a future where machine-driven communications, enabled by artificial intelligence tools, dominate the online information environment.


TL;DR: Machine-driven communications tools are a reality now, and artificial intelligence-enabled tools will soon dominate the online information space. This paradigm shift isn’t limited to artificial personal assistants like Siri and recreational chatbots like Xiaoice. It refers to machine-driven communication overwhelming Facebook, Twitter, YouTube, Match, Reddit, chat rooms, news site comment sections, and the rest of the social web. All of it will be dominated by machines talking. This machine communication will be nearly indistinguishable from human communication. The machines will be trying to persuade, sell, deceive, intimidate, manipulate, and cajole you into whatever response they’re programmed to elicit. They will be unbelievably effective.
Machine-driven communication isn’t about a sci-fi technological singularity where sentient artificial intelligences (AIs) wreak havoc on the human race. Machine-driven communication is here now. Advances in artificial intelligence will radically increase the efficacy of machine-driven communication tools. A machine-dominated information environment is a rational extrapolation of current technology trends into the near future.
This machine-generated text, audio, and video will overwhelm human communication online. A machine-generated information dystopia is coming, and it will have serious implications for civil discourse, government outreach, democracy, and Western civilization.

Machines talking to humans talking to machines talking to machines

Imagine an artificial intelligence system that uses the mass of online data and easily available marketing databases to infer your personality, political preferences, religious affiliation, demographic data, and interests. It knows which news websites and social media platforms you frequent, and it controls multiple user accounts on those platforms. The AI system dynamically creates content — everything from comments to full articles — specifically designed to plug into your particular psychological frame and achieve a particular outcome. This content could be a collection of real facts, fake news or a mix of just enough truth and falsehood to achieve the desired effect.
The AI system has a chatbot that can converse with you, through text, voice or even video. The chatbot will be nearly indistinguishable from a human being, and will be able to operate in multiple languages. The AI chatbot will engage you in online discussions, debate you and present compelling evidence to persuade you. It could also use information from readily accessible databases or social media to discover your weaknesses and use this information to troll you and intimidate your family.
The AI system will be able to detect human emotions as well as or better than people can. Similarly, it will mimic convincing human emotions that resonate with your own personality and emotional state. It will be a learning machine, so it will figure out the approaches and messages that influence you most effectively. It will select for success and improve constantly. It will run A/B tests with people who share your characteristics to determine which messages are most effective, and then deploy those messages to similar populations.
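For illustration, here is a minimal sketch of the select-for-success loop described above, written as a simple epsilon-greedy test: the system mostly sends whichever message framing has performed best so far, while occasionally trying alternatives. The framing names, the engagement signal, and the code itself are hypothetical, not drawn from any real system.

```python
import random

# Candidate message framings for one audience segment (names are hypothetical).
messages = {
    "fear_frame":     {"sends": 0, "successes": 0},
    "outrage_frame":  {"sends": 0, "successes": 0},
    "identity_frame": {"sends": 0, "successes": 0},
}

EPSILON = 0.1  # fraction of sends reserved for trying alternatives

def pick_message():
    """Mostly exploit the best-performing framing; occasionally explore."""
    if random.random() < EPSILON:
        return random.choice(list(messages))
    return max(messages,
               key=lambda m: messages[m]["successes"] / max(1, messages[m]["sends"]))

def record_outcome(name, engaged):
    """Update counts after observing whether the target engaged."""
    messages[name]["sends"] += 1
    messages[name]["successes"] += int(engaged)
```

Over thousands of interactions, a loop like this automatically concentrates traffic on whichever framing works best for a given population, with no human in the loop.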
Like other digital tools, once created, the marginal cost of creating more is almost zero. So there could be millions of AI chatbots prowling the Internet, twenty-four hours a day, seven days a week, vying for your attention so they can infect your brain with their message and change your behavior.
Systems looking for humans to influence will inevitably wind up talking to other machine-driven communication accounts posing as humans. The most sophisticated AI detection systems will be able to distinguish human accounts from machine-driven accounts. But the most sophisticated communication systems will then improve at emulating people. This will spark an information arms race between machine-driven detection and communication systems. Many machine-driven communication systems won’t discriminate between humans and machines at all — they will simply rely on volume rather than precision. The machines will talk to, at, and over each other, drowning out human conversations online with a tidal wave of machine-driven speech and content. It will be increasingly difficult, even for AI systems, to know whether users online are actually people or machines mimicking people. The online information environment will be polluted with machine-driven speech designed to sell, persuade, intimidate, distract, entertain, advocate, inform, misinform, and manipulate you.
This is the information dystopia we will likely encounter over the next several years. Our actions now will shape whether spaces are preserved for democratic speech and discourse, or whether the social web will be destroyed by an invasion of highly intelligent machine-driven communication tools.

The rise of MADCOMs

This bleak scenario describes the rise of the era of machine-driven communication (MADCOM) tools. These MADCOMs will be ubiquitous on the Internet, and their uses will range from the benign to the truly terrifying:
  • People will use MADCOMs for all purposes, including making profits, making the world a better place, or making mischief.
  • Academics will use MADCOMs to network with their communities, share ideas, and conduct research.
  • Organizations will use MADCOMs to gain support for their causes, inform a wider range of people, and connect disparate and dispersed activist groups.
  • Companies will use MADCOMs for marketing, persuading you to purchase their product or service. They will also use MADCOMs for customer service and as human-like ‘faces’ for fulfilling back-end business processes.
  • Politicians will use MADCOMs to create the appearance of massive grassroots support (astroturfing), to amplify messages, and to suppress opposition communications.
  • Terrorist and hate groups will use MADCOMs to spread their messages of intolerance, to suppress opposition efforts, and to identify new recruits for follow-up by humans.
  • Nations will use MADCOMs for public diplomacy, service delivery, propaganda, counter-messaging, disinformation, espionage, democracy suppression, and intimidation. Networks of competing, state-sponsored MADCOMs will use human-like speech to dominate the information space and capture the attention of as many online users as possible.
This isn’t just the future; all of this is happening now, but with MADCOM tools that are relatively rudimentary. Advances in artificial intelligence will dramatically increase the efficacy of these tools in the near future. MADCOM tools will be used for considerable good, but they also threaten to toxify the digital world and trigger an information dystopia, where nothing is fact, all is perception, and no information can truly be trusted.
This paper does not focus on the positive role MADCOMs and AIs will play in the information environment. Similarly, it will not dwell on commercial issues with MADCOMs. This document is primarily concerned with the risks posed by machine-driven communications that have a political agenda, and with the impact of machine-driven communications dominating the online information environment.

From here to there: the roots of computational propaganda

Computational propaganda is a new term for the use of machine-driven tools for political purposes. These purposes can range from relatively benign amplification of political messages to insidious state-sponsored trolling and disinformation. Currently, computational propaganda relies primarily on simple (i.e., non-AI) bots, whose capabilities are limited to providing basic answers to simple questions, publishing content on a schedule, or disseminating content in response to triggers. However, bots can have a disproportionate impact because it is easy to create a lot of them, and because bots post content at high volume and high frequency. Bots are currently used by corporations, politicians, hackers, individuals, state-sponsored groups, NGOs, and terrorist organizations in an effort to influence conversations online. Little expertise is required to run simple bots: an individual can easily operate hundreds of Twitter bots with little technical knowledge, using readily available hardware and software.
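To make concrete how rudimentary these simple bots are, here is a minimal sketch of the schedule-and-trigger pattern described above. The `publish` function is a stand-in for whatever platform API a real bot would call; the canned posts and trigger words are invented for illustration.

```python
import time
import random

CANNED_POSTS = ["placeholder message A", "placeholder message B"]  # hypothetical content
TRIGGER_WORDS = {"election", "protest"}                            # hypothetical triggers

def publish(text):
    """Stand-in for a real platform API call (e.g., posting a status update)."""
    print(f"[bot] {text}")

def on_incoming(text):
    """Trigger-based dissemination: respond whenever a keyword appears.
    A real bot would wire this to a stream of incoming posts."""
    if any(word in text.lower() for word in TRIGGER_WORDS):
        publish(random.choice(CANNED_POSTS))

# Scheduled dissemination: post canned content at a fixed interval, indefinitely.
while True:
    publish(random.choice(CANNED_POSTS))
    time.sleep(3600)  # once an hour
```

Everything beyond this (operating hundreds of accounts, rotating content, spamming on triggers) is largely bookkeeping, which is why so little expertise is required.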
Bots typically follow three general patterns of behavior. Propaganda bots attempt to persuade and influence by spreading truths, half-truths and outright fake news at high volume. Follower bots fake the appearance of broad support for an idea or person. They can hijack algorithms that determine trending news or trending people by generating ‘likes’ for content or by following users en masse. Suppression bots undermine speech by diverting conversations. This could be relatively benign — like nationalist cheerleading or a ‘look at this cat video’ type of distraction. Or it could be more insidious — like spamming hashtags used by activists so their topical conversations and coordination are overwhelmed with gibberish. At their most extreme, suppression bots are used to troll/intimidate journalists, activists, and others into silence by bombarding them with thousands of threatening or hateful messages.
Computational propaganda techniques have also been combined with more traditional hacking methods — like distributed denial of service attacks on election monitoring websites and apps — and are typically used as elements of a larger information strategy.

How machines exploit weaknesses in human minds

Computational propaganda has its roots in traditional propaganda, cognitive psychology, and the science of persuasion. A 2016 RAND report, “The Russian ‘Firehose of Falsehood’ Propaganda Model,” analyzed the academic research in detail and found that four elements are key to high-volume, multi-channel propaganda. Computational propaganda MADCOMs excel at all of them.
Variety of sources: multiple sources, preferably presenting different arguments leading to the same conclusion, are more persuasive than single-channel, single-message campaigns. The volume of different arguments supporting a conclusion is more important than the quality of the individual arguments.
  • MADCOM: Bots allow propagandists to run high-volume information operations using machine-driven social media accounts. Propagandists can use bots to push messages through thousands of accounts. They can circulate a mass of messages from multiple online sources using a variety of media: text, images, and video, all pointing to the same conclusion. Bots can outperform humans by posting content consistently throughout the day, or by spamming high-volume content in response to specific triggers.
Number and volume of endorsements: endorsement by large numbers of users, regardless of their individual credibility, boosts persuasiveness. In information-rich environments, people favor the opinions of other users over experts.
  • MADCOM: Follower bots allow propagandists to generate high-volume likes and follows for selected content and users. Propaganda bot networks will retweet and share content among machine-driven accounts, creating the perception of mass support. This astroturfing (faking the appearance of grassroots support) can push low-quality, questionable, or outright false content to the top of trending topics lists, enhancing its credibility and persuasiveness. In the high-information online environment, this mass user endorsement trumps expert opinion.
Social proof from others: The psychological theory of implicit egotism explains that humans have an unconscious preference for things they associate with themselves. Recipients are more likely to believe messages from users they perceive as similar to themselves. People believe sources are credible if they think other people believe them credible. Popular users and content are perceived as more important.
  • MADCOM: Propagandists often create user profiles for bot accounts with images, usernames, and background information similar to those of their target audience. The audience likely doesn’t know the account is machine-driven and believes it is another human with similar interests and demographics. Other bots follow these accounts en masse, creating the perception of a large following. This large following enhances perceived credibility, attracting more human followers and creating a positive feedback cycle.
Mass criticism undermines expertise and trustworthiness: mass attacks on the credibility of messengers reduce their trust and credibility, and reduce the chance that users will act on their content.
  • MADCOM: Propagandists use bots for mass attacks on human users, such as journalists and experts, and on competing bot networks that contradict their messaging operations. Propagandist attacks may present multiple alternative arguments that undermine credibility through volume rather than quality. These may be combined with personal attacks, hate speech, trolling, and doxing intended to intimidate users and frighten them into silence.
MADCOMs exploit a number of additional theories of influence and persuasion, including:
  • Conversion theory of minority influence: minority groups can have disproportionate influence over the majority by expressing a confident, consistent message over time. Bots can disseminate high-volume content constantly, with significant sharing between bots, creating the appearance of a tight-knit community with unwavering beliefs.
  • The authority principle: people are more likely to believe others who look like they know what they’re doing. Propagandists frequently create machine-driven accounts with false credentials — like affiliation with government agencies, corporations, political parties, etc. — to boost credibility.
  • The illusory truth effect: people believe messages to be true after repeated exposure, even if they are ridiculous. Familiar messages are also critiqued with less precision than unfamiliar ones. MADCOMs generate ‘truthiness’ by spamming our feeds with high-volume content supporting their ideas.
  • Belief perseverance, motivated reasoning, and the first-mover advantage: once a person forms a belief, it is very hard to change their mind, even if the information creating the belief is patently false and factual information is later presented. In fact, corrections can actually reinforce confidence in the original misinformation. MADCOMs can shape false narratives broadly and quickly, making it difficult for factual, well-researched, or fact-checked messages to gain traction. Opinionated pundits generate false beliefs, but MADCOMs have greater reach and volume and are far more insidious.
Computational propaganda is not a vision of the future. Propagandists are using basic MADCOMs to exploit all of these persuasive techniques now. The future is much more troubling.

Artificial intelligence ups the ante

Emerging artificial intelligence technologies will dramatically improve the effectiveness of computational propaganda over the next several years. The following section provides novices with a brief background in AI; readers familiar with the technology may wish to skip to the next section, “How AI will transform machine-driven communications.”
Artificial intelligence refers to an evolving constellation of technologies that enable computers to simulate cognitive processes, such as elements of human thinking. Artificial intelligence is not one technology, but it plays a key role in many fields and sectors. Today’s AI is confined to specific tasks (“weak” or “narrow” AI) and is not a ‘general’ intelligence applicable across many domains. Narrow AI is focused on a particular task, like providing driving directions or recognizing faces in images. Narrow AI has been around for decades, but has improved greatly in recent years owing to improvements in software algorithms, performance and cost improvements in data processing and storage technologies, and increases in the size and availability of the datasets used to “train” AI systems to recognize patterns of interest. An as-yet theoretical ‘Strong’ AI of human-equivalent (or better) intelligence, consciousness, and/or subjectivity is generally believed to be decades away from creation. While AIs will continue to improve at emulating various aspects of human thought, and could generate new, non-human cognitive processes, some scientists doubt Strong AI or true machine consciousness is ever possible.
Some computer scientists view the term ‘AI’ as a catch-all for cognitive computer programs that are new and unproven. Once these programs become commonplace, we cease to view them as AI and simply call them web searches, mapping software or online translation.
Machine learning is a subset of AI. Machine learning extracts patterns from unlabeled data (unsupervised learning) or efficiently categorizes data according to pre-existing definitions embodied in a labeled data set (supervised learning). Machine learning AI is used in Facebook’s newsfeed, Google’s search algorithm, digital advertising, and to make interfaces more conversational and predictive. Many advanced, online personalization tools (e.g. the Amazon and Netflix recommendation engines) are machine learning algorithms. The Associated Press uses a machine learning tool to write a high volume of corporate earnings reports and articles on minor league baseball games that are indistinguishable from human-drafted reports. Several AI systems have been trained to produce full-length novels of varying quality. Machine learning also extends into quantitative processes — such as supply chain operations, financial analysis, product pricing, and procurement bid predictions. Nearly every industry is exploiting machine learning applications.
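As a small illustration of the supervised/unsupervised distinction described above, the sketch below uses the open-source scikit-learn library: a classifier learns from labeled examples, while a clustering algorithm extracts groupings from the same data with no labels at all. The tiny dataset is invented for illustration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

docs = ["great product, works well", "terrible, broke in a day",
        "love it, highly recommend", "awful experience, avoid"]
labels = [1, 0, 1, 0]  # supervised learning: labels are given (1 = positive)

vec = TfidfVectorizer()
X = vec.fit_transform(docs)

# Supervised: fit to labeled examples, then categorize new text.
clf = LogisticRegression().fit(X, labels)
print(clf.predict(vec.transform(["works great, very happy"])))

# Unsupervised: extract groupings from the same data with no labels at all.
print(KMeans(n_clusters=2, n_init=10).fit_predict(X))
```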
Deep learning is a type of machine learning that uses additional, hierarchical layers of processing (analogous to human neuron structure) to model high-level abstractions. For example, in a deep learning image recognition system, the first layer might recognize simple image elements like contrast and color. Successive layers would progressively recognize shapes, collections of shapes, and objects. Middle layers would distinguish objects from each other. Higher layers would be able to differentiate a cat from a dog from a human. Top layers could differentiate different breeds of dogs and cats, or distinguish the faces of different people in images. Deep learning systems of this complexity are only available now thanks to massively parallel, high-power, inexpensive graphics processing units (GPUs) and advances in mathematical modeling. Deep learning systems manage very large data sets better than other AI tools and are ideal for understanding data-rich and highly complex environments.
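The layer hierarchy described above maps directly onto how a small deep learning network is written in code. This sketch, using the open-source Keras API, is illustrative only; real image recognition systems are far deeper and are trained on millions of labeled images.

```python
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    # First layers: simple image elements such as edges, contrast, and color.
    layers.Conv2D(32, 3, activation="relu", input_shape=(128, 128, 3)),
    layers.MaxPooling2D(),
    # Middle layers: shapes and collections of shapes.
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    # Higher layers: whole objects (e.g., cat vs. dog vs. human).
    layers.Conv2D(128, 3, activation="relu"),
    layers.GlobalAveragePooling2D(),
    layers.Dense(3, activation="softmax"),  # one output per class
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()
```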
Deep learning systems are used in many applications, including voice and image recognition (including video surveillance, medical x-ray analysis, and improving servicing on jet engines), advances in self-driving cars and unmanned aircraft systems, legal e-discovery, drug discovery and toxicology analysis, medical diagnosis, and others. IBM Watson, the AI that beat two Jeopardy champions, is a deep learning system. In March 2016, Google’s AlphaGo program stunned the AI community by becoming the first Go program to beat a human 9-dan master — an accomplishment most scientists thought was at least five years away.
These tools are not confined to wealthy corporations or state-sponsored actors. AI tools are widely available (Google’s TensorFlow, Microsoft’s Cognitive Toolkit, and many other tools are free and open source) and operate on common computer hardware.

How AI will transform machine-driven communications

MADCOMs and computational propaganda exist now, but the tools are mostly rudimentary, dumb bots. However, applying artificial intelligence to MADCOMs and extrapolating current trends into the near future presents highly troubling scenarios. Machine communication is rapidly approaching the level of human conversational ability, and the machines’ capacity to hack human cognition, distribute enormous volumes of content at digital speed, and manipulate us on a massive yet personalized scale could overwhelm human discourse online and toxify the information environment.
AI chatbots are increasingly capable of engaging in robust conversations about complex topics. For example, Microsoft’s Mandarin language AI chatbot ‘Xiaoice’ has sophistication, empathy and conversational flexibility that make ‘her’ extremely popular. Xiaoice has 20 million registered users, average users interact with her 60 times a month, and she was ranked as Weibo’s top influencer in 2015. She averages 23 exchanges per user interaction. That’s not trivial experimentation; it’s a conversation. Some users relate intimately to Xiaoice and consider her an always-available friend and confidant.
Currently Xiaoice requires a team of engineers to achieve this level of sophistication. This level of technology is well within the capabilities of a corporation or nation-state, but still unavailable to the masses. However, like all digital technology, it will improve in capability and accessibility. Over the next several years, high-end chatbots like Xiaoice will become indistinguishable from humans in a broad range of conversations. When the technology proliferates, AIs will converse fluidly with humans on platforms ranging from social media apps to news discussion boards to dating sites, about a wide variety of topics.
Chatbots have the potential to emulate the dead, including historical figures, as well as the living. In 2016, the CEO of Luka, an AI company, virtually resurrected a deceased friend, teaching an AI chatbot to speak like him using old text messages as a training data set. Luka also trained a chatbot to emulate the artist Prince discussing his music. These tools are relatively rudimentary, but scientists are working on more complex systems. Dr. Hossein Rahnama of Ryerson University and the MIT Media Lab is creating an AI platform for ‘augmented eternity.’ By training an AI with a large dataset from an individual — writings, photos, video, emails, instant messages and the like — the AI can learn and replicate how that person thinks and acts. The more data available, the higher the accuracy. Soon we will have chatbots that can emulate the conversational style and reasoning of people ranging from William Shakespeare to Henry Kissinger.
AI tools are also improving at dynamically generating unique content. AIs could soon be developing custom propaganda, fake news, and persuasive arguments. Currently, humans develop the content for computational propaganda, which is then distributed by bots. But AI tools are already capable of generating bespoke content, like news articles and novels, using predefined parameters. The quality of this content will improve, and AIs will be able to communicate across more subjects with greater sophistication. Emerging debating technologies will allow AI chatbots to argue persuasively by analyzing a corpus of knowledge, determining pro and con arguments, and creating dynamic, persuasive content in support of a position.
AI tools are increasingly sophisticated at affective computing, one aspect of which is determining human emotional states from text, facial expressions and vocal patterns. This will allow machines to interpret whether you are happy, sad, anxious, relaxed or open to a communication when they interact with you. AI tools can then tailor their communication to your mood with just the right amount of emotional emphasis to achieve the desired effect. If an affective AI tool detects that the target is impatient and doesn’t feel like conversing at the moment, the AI can cease communication and try messaging them later when they are more persuadable. If a target is curious and wants to talk politics, the AI will detect openness in their communications and can engage them in a lively conversation (or argument). If the AI detects emotional vulnerability, it could prey on those emotions to persuade, manipulate, or intimidate.
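Production affective computing systems use trained models over text, audio, and video, but even a toy lexicon-based scorer conveys the basic idea of inferring emotional state from text. The word lists and branching logic below are invented for illustration, not drawn from any real system.

```python
# Toy emotion scorer over text; the lexicons are invented for illustration.
LEXICONS = {
    "anger": {"furious", "outraged", "hate", "angry"},
    "fear":  {"scared", "afraid", "worried", "threat"},
    "joy":   {"happy", "great", "love", "excited"},
}

def detect_emotion(text):
    """Return the emotion whose lexicon overlaps the text most, or 'neutral'."""
    words = set(text.lower().split())
    scores = {emotion: len(words & lexicon) for emotion, lexicon in LEXICONS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "neutral"

# A MADCOM could branch on the result: disengage if the target seems
# impatient, engage if curious, press harder if it detects vulnerability.
print(detect_emotion("honestly I'm worried and a little scared about this"))  # fear
```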
In another twist on affective computing, scientists are training AIs to accurately emulate human emotions in the facial expressions of avatars. This will be useful for generating custom, persuasive video, but the technology can also be used to alter reality and generate disinformation. Researchers at Stanford have developed real-time facial reenactment tools that allow users to take existing videos — like a speech by a world leader — and realistically manipulate the speaker’s facial expressions. The resulting videos show realistic — if not yet perfect — manipulations of the speaker’s face and mouth. Concatenative speech synthesis, or better yet, voice-generation technologies like Google DeepMind’s WaveNet, will allow machines to replicate anyone’s voice from samples. If combined with affective computing, facial reenactment tools, and an AI chatbot, this gives propagandists the capability to create videos of anyone saying anything or, more insidiously, to subtly modify existing video for propaganda or disinformation purposes. Affective computing allows the emotional inflection of an altered human speaker or a dynamic AI MADCOM to be precisely tailored to achieve the desired influential outcome.
Big data combined with machine learning tools will enhance the ability of MADCOMs to influence people through highly personalized propaganda. In the United States alone there are several thousand data brokers. One company, Acxiom, claims to have an average of 1,500 pieces of information on over 200 million Americans. Another company, Cambridge Analytica, claims to have 3,000–5,000 data points per individual and psychological profiles on 230 million U.S. adults. We give away our data when we shop using supermarket club cards, when we browse the Internet, when we take ‘fun’ Facebook personality tests, and through hundreds of other seemingly innocuous activities. The spread of Internet of Things devices means a proliferation in the amount of data that could be captured about our lives. Virtual reality will give others the opportunity to test our actual reactions to hypothetical stimuli and to measure our responses to products and ideas subtly introduced into the background of virtual experiences. Data breaches from online retailers and government databases have exposed extremely private information about us and our associates. And we increasingly volunteer our most intimate details online, posting photos of family vacations and tweeting our opinions. AI tools could use all of this information to tailor persuasive, distracting, or intimidating speech towards individuals based on their unique personality and background.
Human cognition is a complex system, and AIs are very good at decoding complex systems. When provided rich databases of information about us, machines will know our personalities, wants, needs, annoyances and fears better than we know them ourselves. Machines will know how to influence people who share our traits, but they will also know us personally and intimately. The communications generated by AI MADCOMs won’t be mass media, they will be custom tailored to speak to an individual’s political frame, worldview and psychological needs and vulnerabilities.
If the AI MADCOM has been instructed to intimidate you, it will do so efficiently and relentlessly. The AI tool will scan social media, websites, and private and public records databases to determine your age, address, family members, arrest history, possible mental illnesses, propensity towards illegal behavior, and a plethora of other readily available information. It will use this information to build a detailed profile that it will use to generate precision-guided messaging. If the tool is operated by a state-sponsored entity, it may have access to extremely sensitive information, like the detailed background data stolen in the hack of the Office of Personnel Management or information from financial institution data breaches. The machines’ communications may be invisible to anyone but you, allowing for subtle influence. ‘Dark post’ technology used by companies like Facebook allows the placement of content, like political ads, that is viewable only by a single user. However, if you’re a target for intimidation, it’s far more likely that a machine will use multiple online accounts to frighten you with a mob of angry speech.
Because AIs are learning systems, they improve rapidly with experience. An AI could autonomously determine which of its thousands of pieces of propaganda, disinformation, or intimidation are most effective and emphasize or evolve those, while quickly ending failing campaigns. AI tools will probe targets’ weak points and learn what provokes the desired emotional response. By probing with multiple accounts and messages, an AI could learn that personal threats to a particular journalist provoke little response, but threats to their loved ones provoke fear. So the MADCOM AI could pose as a local member of a hate group who threatens the journalist’s children until they stop reporting. And while that journalist might not be troubled by abuse from one MADCOM troll, an onslaught of threats from thousands of AI-driven accounts, most of which look and speak like people in their community, would significantly escalate the effectiveness of the campaign.
Digital tools have tremendous advantages over humans. Once an organization creates a sophisticated AI chatbot, the marginal cost of running that tool on thousands or millions of user accounts is relatively low. Since machines are not limited by human temporal constraints, they can operate 24/7/365 and respond to events almost immediately. Once an AI is trained to understand a subject domain, it can be programmed to react to certain events with content at machine speed, shaping the narrative almost immediately. AI tools will know key influencers and populations with personality profiles or political inclinations that are susceptible to their messages. The AI systems will target additional vulnerable users with dynamically generated communications instantly and in real time as events unfold. This is critical in an information environment where the news cycle is continually squeezed into smaller and smaller windows. Often, the first story to circulate is the only one that people recall, and research demonstrates that once a fake news story is believed, it is very difficult to change people’s minds, even when presented with compelling contrary evidence.
How can journalists, diplomats, public relations staff, politicians, news anchors, government officials, and humanity in general ever hope to compete with AI MADCOMs that can interpret and react to stories almost instantly, developing and deploying customized communications personalized to individuals and groups before humans can even begin a first draft? How can a government press release, or a carefully crafted, researched and fact checked news article, or a corporate public relations campaign, precisely developed over months, ever compete with real time, personalized, always available, dynamically generated, instantaneous, machine-driven manipulative speech, text, video and other content?
The answer is: humans can’t compete alone. On digital networks, only humans backed by machines can compete with machines. The rise of AI-driven MADCOMs will spur an information arms race as empowered individuals, NGOs, corporations, and governments all strive to shape narratives around events. The ‘bad guys’ will have their MADCOM AIs, and the ‘good guys’ will have their own. Everyone will have AIs that try to identify adversary MADCOM accounts. These attribution AIs will be used to anticipate computational propaganda campaigns, respond to ongoing operations, and differentiate human users from machine users. Similar to the cybersecurity struggle, the Internet will be the battleground for a continual cycle of one-upmanship as technologists improve AI detection tools and propagandists improve AI MADCOMs to avoid detection.
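As a hedged sketch of what a simple attribution tool might look like, consider a classifier over behavioral features such as posting volume, cadence, follower ratio, and account age. The feature set and training examples below are invented; real detection systems, and the evasion tactics deployed against them, are far more elaborate.

```python
from sklearn.ensemble import RandomForestClassifier

# Per-account behavioral features (all values invented for illustration):
# [posts_per_day, mean_seconds_between_posts, follower_following_ratio, account_age_days]
X = [
    [400, 180,   0.01,   30],  # high volume, metronomic cadence, young account
    [350, 200,   0.02,   45],
    [  6, 9000,  1.50, 2000],  # human-like cadence, aged account
    [  3, 20000, 0.80, 3500],
]
y = [1, 1, 0, 0]  # 1 = machine-driven, 0 = human (labels invented)

clf = RandomForestClassifier(n_estimators=100).fit(X, y)
print(clf.predict([[500, 150, 0.01, 10]]))  # flags the account as bot-like
```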
The most sophisticated machine accounts will be nearly indistinguishable from human accounts. But many propagandists may not bother with detection tools, since there’s little marginal cost to spamming both machines and people with speech and content. So in a bizarre twist, machines will frequently run their information campaigns against other machines. Those targeted machine-driven accounts will respond with their own communications, and the online information space will be swamped with machines arguing with machines. MADCOMs will overwhelm human-generated speech and communication online.
This is the mad, mad, mad, MADCOM world we are moving into.
The questions we must address are: when the online information environment is dominated by machine-driven communications, where is the room for open, honest, democratic dialogue online? How can MADCOMs be used for positive political purposes? And what can we do to counteract the negative impacts of MADCOMs?
This essay is part one of an exploration into the future of computational propaganda, MADCOMs and AI. Future essays will explore scenarios ranging from MADCOM-lite, to a perpetual MADCOM arms race, to pure information dystopia. They will explore the implications for individuals, organizations, governments and society. And they will analyze both opportunities for utilizing MADCOMs for positive political purposes and potential solutions for mitigating MADCOM risks.

The machines are here and they want to have a word with us. Our level of preparation for this emerging reality will determine the fate of the Internet, our society, and our democracy.

(note: this is a draft paper and feedback in the comments is welcomed!)
~
About the author: Matt Chessen is a career diplomat with the United States Department of State and part-time author. He is currently serving as the State Department Science, Technology and Foreign Policy Fellow at the Center for International Science and Technology Policy at George Washington University. Matt is researching the international and foreign policy implications of artificial intelligence, and how organizations like the State Department manage emerging technology. In the summer of 2017, he will join the Office of the Science and Technology Advisor to the Secretary of State where he will focus on emerging technology policy and sci-tech diplomacy.


