Today, March 4, 2016, Google confirmed that it will remove certain search results from all of its sites beginning next week (March 11, 2016) for searches conducted within the European Union. This comes on the back of a 2014 ruling by the Court of Justice of the European Union, which decided that Google is obligated to comply with requests to remove certain search results put forward by European complainants, a decision that has come to be known as the “right to be forgotten.” Per the stipulations of the ruling, European internet users can submit a formal request that Google delist results that are no longer relevant to a search for their name and are, for all intents and purposes, outdated.
This is significant because, up until this point, Google would only delist such results on its European Union domains, per the request of the user. The information could still be accessed through several easily circumvented workarounds, such as searching from google.com instead of a local domain, which effectively undermined the “removal” in the first place.
Google’s official EU spokesperson released a statement: “starting next week, in addition to our existing practice, we will also use geolocation signals (like IP addresses) to restrict access to the delisted URL on all google Search domains, including google.com, when accessed from the country of the person requesting the removal… We’ll apply the change retrospectively, to all delisting that we have already done under the European court ruling.”
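The mechanism described in the statement, using the searcher’s IP address to decide whether a delisted URL should be suppressed, can be sketched roughly as follows. This is a simplified illustration, not Google’s actual implementation; the IP-prefix lookup table, the example URLs, and the delisting data structure are all invented for the example.

```python
# Simplified sketch of per-country delisting based on geolocation.
# NOT Google's actual implementation; all data here is invented.

# Delisting requests: URL -> country of the person who requested removal.
DELISTED = {
    "http://example.com/old-article": "FR",
}

# Toy IP-prefix-to-country table standing in for a real geolocation database.
IP_TO_COUNTRY = {
    "81.56.": "FR",    # illustrative prefix only
    "93.184.": "US",   # illustrative prefix only
}

def country_of(ip: str) -> str:
    """Return the country code for an IP, or 'UNKNOWN' if not found."""
    for prefix, country in IP_TO_COUNTRY.items():
        if ip.startswith(prefix):
            return country
    return "UNKNOWN"

def filter_results(results: list, searcher_ip: str) -> list:
    """Drop delisted URLs when the search comes from the requester's country."""
    searcher_country = country_of(searcher_ip)
    return [url for url in results if DELISTED.get(url) != searcher_country]

results = ["http://example.com/old-article", "http://example.com/news"]
print(filter_results(results, "81.56.0.1"))   # searcher in FR: delisted URL dropped
print(filter_results(results, "93.184.0.1"))  # searcher in US: full results
```

The point of the sketch is the asymmetry the article describes: the same query returns different results depending on where the request originates, rather than the URL being removed from the index globally.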
Although this is a big step for internet privacy and security in general, the looming problem in light of all of this remains that Google, left to its own devices, would see no value in enacting such restrictions upon itself. Essentially, it was not until Google was literally ordered by a court, through a long, drawn-out legal battle, that it was required to do so. As we can see, no such law exists in other countries such as the US or Japan.
That said, it may be the catalyst for a more private and respectful relationship between tech giants and their users; however, I wouldn’t hold my breath. The very notion of privacy has changed more in the past 5 years than in the preceding 5,000. To many, it seems all too obvious that the Googles and Facebooks of the world should adhere to a higher code of ethics, and not merely when ordered to under threat of law. But many others, primarily at the younger end of the millennial and post-millennial generations, simply do not care; access to their intimate, personal internet data gives them no pause. Whether one side is right or more ethical remains to be seen, or perhaps it doesn’t. It may be that this highly subjective notion has no more grounding in reality than the simple notion of something being good, or having quality in and of itself.
In 2016, big data spending and the growth rate of investment in the big data industry are expected to hit an all-time high.
According to public sector IT distributor immixGroup, civilian big data spending could max out as high as $2 billion this coming year. Coupled with the defense sector, an annual total of $3.6 billion could be spent on big data by the US alone.
“Big data” generally refers to extremely large data sets that are too massive to be run through traditional data processing applications. With the right IT equipment, the data can be analyzed computationally to reveal useful trends and associations, especially in business/human behavioral contexts.
Big data is commonly associated with web behavior and social network interactions, but more traditional data (regarding product transactions, financial records, etc.) also falls into the category of big data.
Big data can also be sorted into unstructured and multi-structured data.
Unstructured data derives its name from its disorganized nature and the inherent difficulty of fitting it into any pre-defined model. Often, unstructured data comes in the form of text, whether metadata or a tweet.
Multi-structured data comes in many forms and is created by non-transactional systems such as machines, sensors, and customer interaction streams. The variety of the systems from which it is created makes it extremely expensive and difficult to use.
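To make the distinction concrete, here is a minimal sketch contrasting the two. The tweet text and the transaction field names are invented for illustration: a transaction record maps directly onto a fixed schema, while a tweet’s useful features have to be pulled out heuristically with pattern matching.

```python
import re

# Structured data: a transaction fits a pre-defined schema directly.
transaction = {
    "product_id": 1042,
    "price_usd": 19.99,
    "timestamp": "2016-03-04T12:00:00Z",
}

# Unstructured data: a tweet has no fixed fields; any structure must be
# extracted heuristically (and imperfectly) after the fact.
tweet = "Loving the new #bigdata report from @immixGroup! http://example.com/report"

def extract_features(text: str) -> dict:
    """Pull hashtags, mentions, and URLs out of free-form text."""
    return {
        "hashtags": re.findall(r"#(\w+)", text),
        "mentions": re.findall(r"@(\w+)", text),
        "urls": re.findall(r"https?://\S+", text),
    }

print(extract_features(tweet))
# {'hashtags': ['bigdata'], 'mentions': ['immixGroup'], 'urls': ['http://example.com/report']}
```

The extraction step is exactly what makes unstructured and multi-structured data expensive to use at scale: every new source needs its own parsing rules, and the rules are never complete.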
In 2013, Gartner estimated that 85% of the Fortune 500 would remain unable to exploit big data through 2015. The velocity at which data is created and the diversity of its many forms present an unprecedented problem for businesses and government agencies alike. The information is there, and it is useful, but how to sort through all of it remains a largely uncracked code.
That said, the U.S. federal government is on the case. The amount of information that federal agencies must collect, store, process, and manage is, of course, unimaginably large, and due to the government’s surprising shortage of big data talent, the private sector is getting involved.
Global IT company Unisys Federal recently sponsored a survey that found that 46% of respondents (all IT managers of US federal agencies) “plan to increase their use of third-party consultants and contractors for big data initiatives in the coming year.”
Considering the U.S. federal government now needs to take charge of exabytes (bytes in the quintillions) of data, those 46% might be onto something.
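To put “exabytes” in perspective, a quick back-of-the-envelope calculation using decimal (SI) units, where one exabyte is a quintillion bytes:

```python
# Scale of an exabyte in decimal (SI) units.
ONE_EXABYTE = 10**18    # bytes: a quintillion
ONE_TERABYTE = 10**12   # bytes: roughly one consumer hard drive of the era

drives_per_exabyte = ONE_EXABYTE // ONE_TERABYTE
print(f"1 EB = {ONE_EXABYTE:,} bytes")
print(f"1 EB = {drives_per_exabyte:,} one-terabyte drives")
```

A single exabyte is a million one-terabyte drives, and the government is handling multiple exabytes, which is why the storage and processing problem dwarfs anything traditional applications were built for.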
The movement towards implementing big data analysis has actually been a two-steps-forward, one-step-back type progression.
In 2012, federal spending attributed to big data topped out at $832 million. By 2013, that number had fallen to $693 million. 2014 saw gains again, with federal spending launching up to $1 billion.
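The figures above work out to roughly a 17% drop from 2012 to 2013 followed by a 44% rebound in 2014. A quick check of that arithmetic, using the spending numbers cited in this article (in millions of USD):

```python
# Federal big data spending figures cited above, in millions of USD.
spending = {2012: 832, 2013: 693, 2014: 1000}

def pct_change(old: float, new: float) -> float:
    """Year-over-year change as a percentage of the earlier year."""
    return (new - old) / old * 100

print(f"2012 -> 2013: {pct_change(spending[2012], spending[2013]):+.1f}%")
print(f"2013 -> 2014: {pct_change(spending[2013], spending[2014]):+.1f}%")
# 2012 -> 2013: -16.7%
# 2013 -> 2014: +44.3%
```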
Some analysts believe that big data for private and public sector use could reach as high as $3.9 billion by 2017 and $4.2 billion by 2018.
Big data analysis has a lot of ground to cover, but clearly money and expertise (at least in the private sector) are not in short supply.