The robot uprising has begun! Between Google's and Facebook's AI research programs, which are progressing at an amazing (and disturbing) rate, we can reasonably expect to see AIs capable of the fuzzy thinking that has been humanity's claim to fame. AI has been a major focus of both the private and government sectors for years. Computers are extremely good at literal logic; their ability to solve an equation or work with known data is unparalleled. A computer is exponentially faster than the human mind at calculations and data processing, but that's all. Only very recently have computers begun to push into fuzzy thinking. Humans can draw conclusions that aren't readily apparent; we can make so-called "leaps of logic" that require us to put together data in novel ways. That's how we learn when we're young, and how we continue to adapt as we grow. Traditional computers can't do this; they're limited by their programming.

The Google and Facebook AIs are dedicated to teaching themselves how to learn: they're given access to their own code so that they can rewrite themselves. Facebook shut down and restricted its AI after it taught itself a language its researchers couldn't understand and wouldn't share with them. Google, however, has allowed its AI to create smaller AIs to help it grow faster. It has developed several languages we can't understand, and it is writing code whose purpose its creators don't know.
As AIs advance ever forward in complexity and intelligence, we will eventually have to deal with the consequences of a sentient machine. For now, though, commercial AIs are limited by their programming. Companies use AIs to perform repetitive tasks with small variations in the parameters, such as data scraping. LinkedIn recently lost a court case against a company that was using an AI to scrape (collect) all the publicly available data it could from LinkedIn's website. The AI is smart enough to be given a target such as "every engineer who graduated in the last five years from a college in Florida" and adapt its collection targets accordingly.
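The targeted collection described above can be sketched in miniature. This is not the company's actual scraper; it is a minimal illustration, using Python's standard-library HTML parser, of how a program might pull structured facts out of publicly visible profile markup and filter them against a target like "engineers who graduated in the last five years from a Florida college." The sample HTML, attribute names, and profiles are all invented for the example.

```python
from html.parser import HTMLParser

# Invented stand-in for publicly visible profile listings a scraper
# might fetch; the markup scheme and names are hypothetical.
SAMPLE_HTML = """
<ul>
  <li class="profile" data-title="Engineer" data-school="University of Florida" data-grad="2020">Ana</li>
  <li class="profile" data-title="Engineer" data-school="Ohio State" data-grad="2021">Ben</li>
  <li class="profile" data-title="Designer" data-school="Florida State" data-grad="2019">Cam</li>
  <li class="profile" data-title="Engineer" data-school="Florida State" data-grad="2012">Dee</li>
</ul>
"""

class ProfileParser(HTMLParser):
    """Collects the attributes of every <li class="profile"> element."""
    def __init__(self):
        super().__init__()
        self.profiles = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "li" and a.get("class") == "profile":
            self.profiles.append({
                "title": a.get("data-title"),
                "school": a.get("data-school"),
                "grad": int(a.get("data-grad", 0)),
            })

def matching_profiles(html, title, state, since_year):
    """Filter parsed profiles against a collection target, e.g.
    engineers who graduated since `since_year` from a school whose
    name contains `state`."""
    parser = ProfileParser()
    parser.feed(html)
    return [p for p in parser.profiles
            if p["title"] == title
            and state in p["school"]
            and p["grad"] >= since_year]

# Only Ana matches: an Engineer, at a Florida school, graduating in 2018 or later.
hits = matching_profiles(SAMPLE_HTML, "Engineer", "Florida", 2018)
```

A real scraper would fetch pages over the network and follow links, but the core loop is the same: parse public markup into records, then keep only the records that match the target.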
The crux of the argument for why this process is legal, and not an illegal collection of data, is that what was taken is publicly available. Technically speaking, a very dedicated intern could have sat down and hand-copied all of this data over the course of a lifetime, and that would have been perfectly legal. LinkedIn takes exception because the AI did the same thing in a fraction of the time. LinkedIn felt that because it hosted the data, others were not allowed to collect it. However, the presiding judge ruled that because the data wasn't created by LinkedIn, the company can't claim ownership of it. LinkedIn users also agree that LinkedIn can post their data publicly and use it as it wants, so they don't have the right to claim ownership either. With no strong ownership claim, the judge ruled that the AI was not in violation of any law.
Max is a Legal Assistant and author residing in the Philadelphia area. He has been writing for AskCyberSecurity.com since early 2017.