JavaScript – the Internet’s favourite language

According to the 2018 Stack Overflow developer survey, JavaScript is now the most commonly used programming language in the world (see the images below).

[Images: Stack Overflow 2018 survey results – most popular technologies among all respondents and among professional developers]

Jeff Atwood, the American software developer, author, blogger, and founder of Stack Overflow, proposed Atwood’s Law, which states that “Any application that can be written in JavaScript, will eventually be written in JavaScript.”

This is quite an ambitious claim to make, so in this post I will give a short overview of JavaScript’s origins and how we arrived at this point.

Why JavaScript?

JavaScript’s popularity is largely due to its role as the scripting language of the World Wide Web (WWW).

It is the third ingredient in the “layer cake” of core web technologies – the other two being HTML and CSS. While HTML is responsible for giving structure to a web page and CSS for the styling, JavaScript is used for pretty much everything else – from updating content and displaying animations to managing user interaction.

When you open a web page that contains JavaScript, the code is executed by your browser’s built-in JavaScript runtime engine.

The true power of JavaScript, however, lies in the Application Programming Interface (API) functionality built on top of the core language. These APIs are what allow JavaScript to interact with and manipulate web pages, turning them into more than just screens of boring, static text.
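As a small, hedged illustration (the element IDs below are invented for the example), this is the kind of thing the browser’s DOM API lets JavaScript do – react to a click and update the page without a reload:

  // Grab two elements from the page and update one when the other is clicked.
  const button = document.getElementById('greet-button');   // hypothetical button
  const heading = document.getElementById('page-title');    // hypothetical heading
  button.addEventListener('click', () => {
    heading.textContent = 'Hello from JavaScript!';
  });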

A Quick History

JavaScript has its origins in Netscape.

But the story begins a bit earlier, in 1961, when the ECMA (European Computer Manufacturers Association) was founded to standardize computer systems across Europe. Over time it defined standards for floppy disks, the FAT filesystem and C++, and would later publish the scripting-language standard known as ECMAScript.

In 1993, the National Center for Supercomputing Applications (NCSA), associated with the University of Illinois, released NCSA Mosaic – one of the first popular graphical web browsers.

In 1994, several former NCSA developers founded a new company, Mosaic Communications Corporation, with the intention of creating the world’s number one web browser. The code name of this project was Mozilla (shorthand for “Mosaic killer”). Upon release, their new browser was so successful that it immediately took the bulk of the market share. In order to avoid any possible trademark problems with the NCSA, the browser was renamed Netscape Navigator and the company Netscape Communications.

Netscape Communications wanted their web browser to be more dynamic, and in 1995 they recruited Brendan Eich with the aim of embedding the Scheme programming language. At the same time, Netscape also collaborated with Sun Microsystems in order to leverage Sun’s Java programming language and gain the upper hand on their new challenger, Microsoft. It was decided that Netscape’s new scripting language should complement Java and have a similar syntax. Eich immediately started developing a prototype – which he completed within 10 days.

His scripting language, which he called “Mocha”, would later form the basis of the ECMAScript standard. It was loosely typed, unlike Java or C++, and was interpreted by a runtime engine rather than pre-compiled.

In September 1995, this new scripting language shipped with the Netscape Navigator 2.0 beta under the name “LiveScript”. It was finally renamed “JavaScript” and released under this name with Netscape Navigator 2.0 Beta 3 in December 1995.

“JavaScript” should not be confused with Java, although they share some syntax features. But since Java was so popular at the time, and due to Sun Microsystems’ influence, the name “JavaScript” was chosen as little more than a PR stunt!

In July 1996, Microsoft released Internet Explorer 3.0, which contained its own implementation of JavaScript called “JScript”. This was the start of a painful period of fragmentation between the two web browsers: certain content would display correctly only in Internet Explorer, or not at all, and vice versa…

In June 1997, ECMA published the first edition of the ECMA-262 specification, which standardized the core language under the name ECMAScript. This was a huge step in the right direction for web browser standardization. In December 1999, the 3rd edition of the specification was published.

Netscape Navigator was starting to lose the browser war, mainly because Microsoft had a monopoly with the Windows operating system, which came pre-loaded with Internet Explorer. In 1998 Brendan Eich co-founded the Mozilla project with the intention of managing the open-source contributions to the Netscape source code. However, Netscape was bought out by AOL in 1999, and by 2003 the Netscape Navigator browser was officially declared dead… But the Mozilla community endured, and in November 2004 they released an entirely new web browser called Firefox, which continued to use JavaScript.

Two major breakthroughs in the next couple of years further solidified JavaScript’s position as the de facto scripting language of the web.

The first happened in February 2005, when Jesse James Garrett coined the term “Ajax” in a whitepaper describing a technique used by new, emerging web services. Ajax (an acronym for Asynchronous JavaScript and XML) allowed JavaScript to load data, or even individual sections of a web page, asynchronously via XMLHttpRequest calls. (Previously the whole web page had to be reloaded.)
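As a rough sketch of that pattern (the URL and element ID are made up for the example), an XMLHttpRequest can fetch just the data needed and update one section of the page:

  const xhr = new XMLHttpRequest();
  xhr.open('GET', '/weather/today');            // hypothetical endpoint
  xhr.onload = function () {
    if (xhr.status === 200) {
      // Update a single section instead of reloading the whole page
      document.getElementById('forecast').textContent = xhr.responseText;
    }
  };
  xhr.send();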

The second was in August 2006, when the jQuery library for JavaScript was released. It bundled many development features together, including Ajax functionality, and became the most popular JavaScript library ever.
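Part of jQuery’s appeal was how much boilerplate it removed. A hedged example (again with a made-up URL and selectors, and assuming the jQuery library is loaded on the page): the same “fetch a fragment and insert it” idea becomes a one-liner:

  $('#load-forecast').on('click', function () {
    $('#forecast').load('/weather/today');   // Ajax request plus DOM update in one call
  });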

In September 2008, Google released their new Chrome web browser along with a new JavaScript engine named “V8”. The popularity of Chrome helped to establish JavaScript as the undisputed scripting language of the web. The next year Node.js, which utilizes Google’s V8 engine, was released. (Node.js is an open-source runtime environment that brought JavaScript to server-side execution and spawned many new technologies.)
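To give a feel for that server-side role, here is a minimal Node.js sketch (the port and message are arbitrary choices for the example) that answers HTTP requests using the same language that runs in the browser:

  // Save as server.js and run with: node server.js
  const http = require('http');

  const server = http.createServer((req, res) => {
    res.writeHead(200, { 'Content-Type': 'text/plain' });
    res.end('Hello from server-side JavaScript\n');
  });

  server.listen(3000, () => console.log('Listening on http://localhost:3000'));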

In December 2009 the specification originally drafted as ECMAScript 3.1 was published as ECMAScript 5. This release also included formal support for JSON (JavaScript Object Notation), which today is hugely popular and has largely displaced XML.
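JSON’s appeal is easy to demonstrate: the built-in JSON object converts between text and JavaScript objects in a single call (the data below is invented for the example):

  const text = '{"name": "Ada", "languages": ["JavaScript", "Python"]}';
  const person = JSON.parse(text);         // text -> object
  console.log(person.languages[0]);        // "JavaScript"
  console.log(JSON.stringify(person));     // object -> text again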

As of 2012, all modern browsers fully supported ECMAScript 5.1.

In June 2015 the ECMAScript 6 specification (commonly referred to as ES6, or ES2015) was published, with a major overhaul of the JavaScript language. This release transformed it into the language most developers and users work with and love today.
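A contrived little snippet showing a few of the ES6 additions that changed day-to-day JavaScript – block-scoped const, classes, template literals and arrow functions:

  class Greeter {
    constructor(name) {
      this.name = name;
    }
    greet() {
      return `Hello, ${this.name}!`;   // template literal
    }
  }

  const names = ['Ada', 'Brendan'];
  const greetings = names.map(n => new Greeter(n).greet());   // arrow function
  console.log(greetings);   // [ 'Hello, Ada!', 'Hello, Brendan!' ]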

The Critics…

As with all things in life, JavaScript also has its critics. They complain that JavaScript is a loosely typed, prototype-based language that (at least historically) lacked block scoping for variables, and so on. In most cases this is simply criticism of syntax that differs from their other favourite languages.

In fact, JavaScript can function as both a procedural and an object-oriented language. Object-oriented languages define classes with properties (for example “Employee”) and then create instances of that class (for example “John”). An instance has exactly the same properties as its parent class (no more, no less).

JavaScript simply has objects. They can define their own properties when created, or even add new properties at runtime. Each object can in turn be the prototype for another object, which then shares all of its existing properties.
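A small sketch of what that looks like in practice (the names are invented for the example): properties can be added on the fly, and any object can act as the prototype of another:

  const employee = { company: 'Acme' };    // a plain object, no class required
  employee.role = 'Developer';             // add a property at runtime

  const john = Object.create(employee);    // 'employee' becomes john's prototype
  john.name = 'John';

  console.log(john.name);      // "John"  (own property)
  console.log(john.company);   // "Acme"  (inherited through the prototype)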

Some purists also say that JavaScript is too simple, but “simple” should not be confused with “easy”. JavaScript can be just as difficult to master as any other programming language, and it is exactly this “simplicity” (i.e. its non-conformance to classical object-oriented languages) that makes it so powerful and adaptable to the web environment.

Atwood’s Law

Let’s go back to Jeff Atwood’s Law. He proposed it on his Coding Horror blog in response to a document by Tim Berners-Lee called “The Rule of Least Power”.

In short, the principle states that computer scientists and programmers should not pick the most powerful language for a task, but rather the least powerful one that is suitable. “The less powerful the language, the more you can do with the data stored in that language.”

He goes on to give the example of a weather information website. If the page presents the data using RDF (the Resource Description Framework), other users could retrieve that information as a table and incorporate it into their own applications or Power BI visualizations, for example. On the other side of the spectrum, if the application were a Java app, a search engine finding the page would have no idea what the page is about or what the data means. Only a user physically interacting with the application would understand its purpose.

The last say

Finally, let’s look at the Stack Overflow report again. 69.8% of all respondents and a staggering 71.5% of professional developers indicated that they had used JavaScript within the last year. These are not just hobbyists or people tinkering with it in their spare time. These are the people responsible for developing the world’s e-commerce, financial and social media websites.

All new students and graduates should have a basic knowledge of JavaScript if they want to be taken seriously in the mainstream development community. Your job might literally depend on it!

Yes, JavaScript might have some funny quirks or features that purists frown upon. But whatever your opinion of the language, just as the Internet has expanded over the last decade, JavaScript will continue to grow in reach and popularity.

 

Other sources:

Jeff Atwood (https://en.wikipedia.org/wiki/Jeff_Atwood)

The Rule of Least Power (https://www.w3.org/2001/tag/doc/leastPower.html)

JavaScript Engine (https://en.wikipedia.org/wiki/JavaScript_engine)

What is JavaScript (https://developer.mozilla.org/en-US/docs/Learn/JavaScript/First_steps/What_is_JavaScript)

The Liberty Life Hack

Late on the 16th of June 2018 (which also happened to be a public holiday in South Africa) I received a very concerning SMS from Liberty Life on my cell phone:

“Dear Valued Customer, Liberty regrets to inform you that it has been subjected to unauthorized access to its IT infrastructure, by an external party who requested compensation for it. Since becoming aware – we have taken immediate steps to secure our computer systems and are investigating the incident.”

[Image: the SMS notification received from Liberty]

First of all, kudos to Liberty (a South African Financial Services company) for being so up-front and informing their customers about the incident. I have written previously about companies that were not open when they got hacked, or even worse, did not even know what was going on!
Admittedly, it took two days for them to take this step, but I suppose any organization would first assess the situation, with the main priority being to stop any further leak of information.

Whoever was in control of the Liberty situation must have been aware that trying to hide a breach of this nature can do even more harm to a company’s reputation. Instead, they engaged with their customers from the beginning, with many follow-up notifications over the next couple of days.

South African Cyber Attacks

Now, this was not the first time a South African company has been compromised. Two major leaks occurred recently – both reported by the Australian security expert and Microsoft MVP, Troy Hunt. The first was last year’s Master Deeds leak, in which the personal information of millions of South Africans was exposed via a property company, Jigsaw Holdings. And more recently there was the breach of the online traffic fines site, ViewFines.

The difference with the Liberty hack, however, was the element of extortion. Apparently the hackers wanted “millions” from the company to avoid the release of “critical information” belonging to “top clients”.

After doing some investigation, it turned out that the person in control of the Liberty situation was indeed their (new) CEO, Mr. David Munro. In an official response, the Sunday Times reported that “the data that was affected by the breach consisted largely of recent emails from the company’s mailing service. He said the company was in the process of investigating the breach, saying the findings of such an investigation would be referred to the authorities”.

Munro also said Liberty had assembled a large team of technology and security specialists with world-class skills and experience in assisting organizations affected by such breaches.

Previous attacks

Liberty has in the past also had its name used in a ‘419’ scam. These scams typically take the form of an email (or in this case an SMS) sent to a recipient by someone pretending to be a legitimate entity, making an offer that results in the recipient willingly transferring money into the scammer’s bank account. (The ‘419’ refers to the section of the Nigerian Criminal Code dealing with fraud. The fact that these types of schemes are commonly associated with Nigeria is probably because many of them promise riches from some Nigerian prince!)

On 25 January 2018, Liberty posted the following on their website:

“Please do not respond to a fraudulent SMS that you may have received about a Liberty Quick Loan as this is not part of our product offering. Please delete the SMS and do not share your personal information.”

Was it possible that the same scammers responsible for the ‘419’ SMS were also responsible for the extortion? Did they somehow manage to get enough inside information via the SMS scam to pull off the hack?

Blame games

Some commentators said the fact that Liberty took 2 days to inform their clients suggested that it did not have a strong-enough focus on its IT systems. In fact, a senior IT executive with more than a decade of experience working with Standard Bank anonymously said that “Nobody [at Liberty] takes IT seriously and then this is what happens”. (Liberty is 53.6%-owned by Standard Bank).

Research done two years ago by World Wide Worx found that “half of IT decision makers in SA corporations believed their organizations were vulnerable to a cyberattack”. More alarmingly, VMware research also reports that 1 in 10 companies would not know within the first 24 hours that they had been breached.

New CEO

Liberty communicated personally with all their clients throughout the initial crisis. Follow-up SMSs were sent on the 17th and 19th of June as well as on the 6th of July. Apart from keeping clients informed about the criminal investigation, they also sent messages urging vigilance regarding phishing, password strength and so on.

But since then, information has not been so forthcoming.

Later, as I was looking at some news articles on Liberty’s website again, I came across a very prominent press release announcing a new CEO for the company. This new CEO turned out to be none other than Mr. Munro, who was dealing with the hack.

Apparently Mr. Munro, previously a non-executive director, replaced Mr. Thabo Dloti, who was leaving due to “a difference of opinion with the Board on the immediate focus of the company at a time when the organization is facing tough operational and environmental challenges.”

“Mr. Dloti believes that given this environment, alignment among key stakeholders is imperative to ensure the effective execution of the strategy required to drive the company forward. This alignment, coupled with the ability to act decisively, is in the best interests of the company and hence Mr. Dloti is stepping aside.”

It is not uncommon for CEOs to willingly resign from their companies, but what really caught my attention was the date of the press release: 16 July 2018 – barely a month after the cyber-attack. So Liberty’s ‘new’ CEO was indeed very new, and these events occurred during the transition period from Mr. Dloti to Mr. Munro.

Now, the purpose of this post is by no means intended to fuel any conspiracy theories, but it is very suspicious that the hack occurred at that specific time.

Were either of the two CEOs’ logon credentials exposed during this period, allowing hackers to gain access to the mail system? For example, an IT employee receiving an email from the CEO might feel obliged to respond if the email appears to originate from within the company.

Or was it the work of some disgruntled employee who stood to lose political influence (or income) with the resignation of Mr. Dloti?

Or was there more involved with the “difference of opinion with the Board” than anyone cared to admit?

Or did the hackers know that the company could be vulnerable during this time?

Or was it simply a coincidence that events unfolded the way they did?

Ongoing investigation

In a previous post, I wrote about the South African version of the GDPR, the Protection of Personal Information Act (POPIA). Although the act makes provision for legal sanctions against firms found not to have proper security in place to safeguard customers’ personal data, it is not always properly governed and implemented.

The Liberty data breach is still under criminal investigation.

In conclusion, the reason for writing this post was two-fold. Firstly: it is a given that these types of events will occur in the future, and there is probably no company that can claim to be safe. Management problems and human deficiencies will always present loopholes for hackers to exploit.

Secondly: how a company publicly deals with the situation can say a lot about the management and leadership within that company… and it might just determine how loyal their customers really are.

GDPR and South Africa

During the last couple of months, information about the European Union’s new General Data Protection Regulation (GDPR) has dominated professional networks, blogs, websites and emails.
Although it has been law for a while now, the 25th of May 2018 marked the day from which businesses and entities could be fined for non-compliance.
Even though the GDPR does not directly affect most people living on the southern tip of Africa, people subscribing to websites and services that have to comply with GDPR received emails like this:

[Image: one of the many GDPR policy-update emails]

As someone involved in the software industry, I can just imagine all the millions of man-hours that were burned by architects, developers, consultants, database administrators, business analysts and marketing departments from thousands of businesses to make this a reality.

GDPR in a nutshell

The main thing to know about GDPR is that it now gives users the:
• Right to permanently delete accounts that gather personal information
• Right to object to how your data is used (as well as the right to opt out)
• Right to access your personal information
• Right to amend your personal information
Companies must also:
• Be transparent about how the data will be used
• Be able to export your data (even to other companies or third-party entities) should you request it

Facebook data leak

GDPR is really good news for consumers. It kicked in barely a month after the now infamous harvesting of Facebook users’ data by the UK-based company Cambridge Analytica. Personal information gathered from millions of Facebook users was used to potentially influence the 2016 US Presidential Election as well as Brexit (the UK’s referendum to leave the European Union).

The Facebook leak showed just how vulnerable (and sought-after) your personal data is and how little control you really have over it.

POPI and PAIA

In South Africa we have the Protection of Personal Information Act (POPI or POPIA) and the Promotion of Access to Information Act (PAIA). These laws are supposed to ensure that consumers and companies conduct themselves in a responsible manner when collecting, storing and using personal data.

According to the South African Government Gazette, the purpose of POPI is to “…regulate, in harmony with international standards, the processing of personal information by public and private bodies…”
These bodies may not process personal information concerning “… the religious or philosophical beliefs, race or ethnic origin, trade union membership, political persuasion, health or sex life or biometric information of a data subject…”

Although it all sounds good on paper, it covers a very broad spectrum and may not be specific enough to ensure people’s safety on the Internet. As we have seen with the Facebook incident, the definition of personal data and personal data custody is murky when it comes to information that a user is required to give (name, surname, email address) versus information that is uploaded freely (photos of holidays, loved ones or children).

One of the most visible outcomes of POPI for consumers is the protection against direct marketing unless you have specifically given consent (opt-in). However, companies are not forced to delete your personal data as they are under GDPR. For example, Takealot, South Africa’s biggest online shopping website, offers no direct way for customers to completely remove their account. And even if a service offers this feature, can you really be sure that all your historical data is also deleted in the process?

And as with most laws, it is only as good as its enforcement. The sensitivity of the data also varies: your medical data in the wrong hands can be much more harmful than your DVD-rental history. How much money will someone be willing to spend to prosecute and convict the DVD shop versus the medical aid company?

PAIA in action

The most famous South African case involving PAIA was the so-called “spy tapes” saga, in which an investigation spearheaded by the opposition party, the Democratic Alliance, turned to the courts in order to gain access to these tapes. They allegedly contained evidence of political interference in the corruption charges against (former) president Jacob Zuma. The allegations about interference were the main reason the National Prosecuting Authority (NPA) originally dropped the charges against Zuma.
Although the court case dragged on for years, it proved that nobody is above the law and that PAIA can indeed be used as an effective investigative and prosecuting tool.

But although POPI and PAIA can be effective when applied to South African citizens and companies, dealing with international privacy laws can be a different ball game. Take the Oscar Pistorius murder case, for example. (Pistorius is the double-amputee Paralympic athlete known as the “Blade Runner”.)

Police investigating the case wanted access to his iPhone because it was used on the night of the murder of Reeva Steenkamp, and they believed it contained vital information. But Oscar conveniently “forgot” the 4-digit passcode needed to unlock the phone. The South African investigators were forced to seek help from the FBI in America in order to get Apple to unlock the phone.
The FBI was accused of dragging their heels in approving the request for help. They apparently demanded to see the original versions of the documents that had been signed off by the South African magistrate and director of public prosecutions. A team of South African officials eventually had to fly to California in order to request help from Apple directly.

Both of these cases evoked strong opinions among the general public – for and against. Although it is a no-brainer that access to private information is necessary to investigate corruption, murder and even terrorism for that matter, the law also states that people are innocent until proven guilty. It is in everyone’s best interest that the legal process takes its course. Under no circumstances should we allow individuals or states to have such power as to enjoy free access to our private information.

Step in the right direction

GDPR, POPI and PAIA are all steps in the right direction. Individuals should have more control over their personal data, and companies must be held accountable for the way they use, store and share it. Although the South African laws are by no means perfect and might not be of the same calibre as GDPR, they are better than nothing at all. But with the fast pace of technological change these days, any law that deals with privacy issues will constantly have to adapt in order to stay relevant.

 

Can self-learning AI chatbots be dangerous?

Recently a report from the Facebook Artificial Intelligence Research lab (FAIR) raised quite a lot of eyebrows. Apparently artificially intelligent (AI) ‘chatbots’ using machine learning algorithms were allowed to communicate with each other in an attempt to converse better with human beings. The social media company’s experiment started off well enough, but then the researchers discovered that the bots were beginning to deviate from the scripted norms. At first they thought that a programming error occurred, but on closer inspection they discovered that the bots were developing a language of their own.

In 2016, an AI Twitter chatbot developed by Microsoft called “Tay” caused quite a lot of embarrassment for the company when Internet trolls started to teach the bot to respond to user questions with racist messages. The aim of this experiment was also to develop an artificially intelligent entity that would be able to communicate with human beings, learn from the experience and get smarter in the process.

Press ‘4’ to wait for the next available operator…

The potential market for these AI chatbots is huge. Imagine if you could call your insurance or medical aid company and immediately speak to one of these bots without waiting hours for a human operator, or navigating through endless recorded messages prompting you to press ‘1’ or ‘2’ to proceed to the next menu.

Imagine if these bots are able to speak to you in your own language, authenticate your identity with voice recognition and immediately understand the problem that you have. Imagine if these bots could communicate instantly with other bots on the other side of the globe to solve your problem.

This scenario is already becoming a reality, and eventually you would not even know that you are talking to a non-human AI.

Maybe Microsoft was a bit premature in releasing their chatbot technology into the Wild West of the Internet, but then again, great lessons were learned in the process. In Microsoft’s defence, they did not program the bot to be racist, nor did the bot itself have any concept of what racism means.

Human communication

Any human language (written or spoken) might not be the most efficient way for AI entities to communicate with each other. Take English, for example. There are many words that basically mean the same thing (think vehicle/motor, or petrol/gasoline).

An AI that has to convert these words to bits and bytes for transmission over broadband Internet connections might come to the conclusion that the ones with the fewest characters are the most efficient, so it could tend to favour certain words and/or phrases.

The way that we change words and sentences to indicate tense might also seem strange to an AI. If the sentence “The boy kicks the ball” must be converted to past or future tense, an AI might devise a strategy of using the character < for past tense and > for future tense. If this sentence is optimized even further, the AI could simply transmit “Boy kick ball <” or “Boy kick ball >” to indicate that the action happened in the past or will happen in the future.

So, this was precisely what the Facebook bots were beginning to do. Below is a short sample of the new ‘language’ that they developed:

Bob: i can i i everything else . . . . . . . . . . . . . .
Alice: balls have zero to me to me to me to me to me to me to me to me to
Bob: you i everything else . . . . . . . . . . . . . .
Alice: balls have a ball to me to me to me to me to me to me to me
Bob: i i can i i i everything else . . . . . . . . . . . . . .
Alice: balls have a ball to me to me to me to me to me to me to me
Bob: i . . . . . . . . . . . . . . . . . . .
Alice: balls have zero to me to me to me to me to me to me to me to me to
Bob: you i i i i i everything else . . . . . . . . . . . . . .
Alice: balls have 0 to me to me to me to me to me to me to me to me to
Bob: you i i i everything else . . . . . . . . . . . . . .
Alice: balls have zero to me to me to me to me to me to me to me to me to

I told you so!

When the general public learned about the Facebook incident, the first response was to call this a Skynet event (as per the popular Terminator movie franchise). Indeed, potential doomsday scenarios in which artificially intelligent entities become self-aware and enslave the human race have been a popular theme of many books and movies over the years (2001: A Space Odyssey, The Matrix, I, Robot, etc.).

But should we be worried?

Isaac Asimov’s Three Laws of Robotics are usually quoted at this point to assure people that there is nothing to be afraid of. However, when Asimov developed these laws, he was thinking about human-like robots or androids that would share our living space and do all our chores and dirty work. (The three laws are quoted at the bottom of this post.)

But today the concept of robots and artificial intelligence has changed dramatically. AI entities might exist purely in a digital state without any physical form. These entities might also be decentralized, distributed across many data centres or compute nodes – making them nearly impossible to destroy.

The concept of ‘doing harm’ to a human being is also very vague. With social media playing such a big part in most people’s lives, cyber-bullying is just as dangerous as physical harm. Most people don’t bother to check the source of news events or posts and are happy to simply forward them to their followers. A malicious AI bot could easily destroy a person’s reputation by associating him or her with racist, harmful or pornographic posts and websites.

Many people have already lost their jobs over something they tweeted.

Conclusion

Companies like Google, Microsoft, IBM and Amazon (which have the funds to invest in machine learning, neural networks and other artificial intelligence technologies) are ultimately doing it to make and/or save money. I am not saying that they are not thinking about the future consequences of the software they are developing. (The fact that the deviations of the Facebook and Microsoft bots could be identified and stopped shows that we are still in control.)

My concern is more that there is no common strategy between the different role-players with everyone essentially doing their own thing. And then there are many rogue nations and companies in the world that do not follow the rules in any case.

Chatbots and artificial intelligence are not going away anytime soon. AI will have a huge impact on our lives in the future – for the good. Lives will be saved, sicknesses healed and processes simplified because machines across the world are constantly analysing problems, learning from them and coming up with clever new solutions. But we always need to be wary of the fact that we could be creating systems that produce unexpected and unintended results.

 

Asimov’s Three Laws of Robotics are as follows:
• A robot may not injure a human being or, through inaction, allow a human being to come to harm
• A robot must obey the orders given it by human beings except where such orders would conflict with the First Law
• A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws

Dealing with workplace Automation

Technology is great. What is not exciting about automated vehicles, Amazon packages delivered to your doorstep by drones, or supermarkets that do not require a human cashier to scan each item individually?

Not a day goes by without a new story about a start-up revolutionizing a process or even a complete industry, new advances in machine learning and cloud computing, or factories run entirely by robots. The unfortunate part is that for each of these success stories, real-life human beings are losing jobs…

Rapid Population Growth

In 2016 the world’s population exceeded the 7.3 billion mark, meaning that the number of people on earth has doubled in the last 45 years.

The United Nations predicts that the figure will be somewhere in the region of 9.7 billion by the year 2050. At the current rate it would require the equivalent of almost three planet Earths to sustain the population if we were to maintain our current lifestyles.

In addition to this alarming population growth, people continue to migrate to urban areas. The United Nations further predicts that by 2050, 66% of the world’s population will be living in cities. In the year 2000, the number was only 47%.

In order to sustain these numbers and the extreme demand on resources, we simply have to do more with less. In practical terms it means that we will continue to turn to science and technology to find better and faster ways to keep up with the demand. And the more we improve, innovate and automate, the less we require human beings to be part of the workforce.

These jobs are lost forever

During the last 20 years, globalization has seen many manufacturing jobs move from First World countries to relatively poor countries that nevertheless had basic education systems. This has been great for the economies of these poorer countries, but the tide is slowly turning for them as well. If one worker (with the help of a robot army) can achieve the same output as 100 factory workers, you simply don’t need to employ those 100 workers.

More production, less employees…

When Apple sells an iPhone, what percentage of the sale price actually goes to the labour force that produced it?

Less than 2 percent.

So when politicians throw around popular slogans like “bring back the manufacturing jobs” they are basically talking about the 2 percent labour required to manufacture an iPhone. Or a flat screen television. Or a vehicle.

Breeding ground for revolution

The net result is that millions of people worldwide are seeing local job opportunities diminish, especially in rural areas. With the migration of people to cities and jobs to foreign countries, so too has the hope of a prosperous future vanished.

The survival of many towns and small villages depends entirely on the surrounding resources and industries – sometimes a single factory. As these industries close down, entire communities are left without income or the means of learning the skills required to work in newer, “smarter” factories.

This is unfortunately a breeding ground for revolution, and politicians have embraced the negative sentiment towards globalization and automation with open arms. And since a lot of traditional voting power still lies in these affected rural areas, many politicians have found huge support in promoting anti-globalism, anti-capitalism, and closed borders and markets.

Basic Income

So this is the current situation. The real question we need to ask is: what can we do about it?

Fortunately for us, this question is also on the minds of many wealthy and powerful business leaders. Elon Musk, the billionaire co-founder of companies like Tesla and SpaceX, believes that the answer lies in a universal basic income. This might come as a shock to many capitalists, but it means that each person on the planet receives a certain amount of money each year so that we can keep the economy going and ensure that workers displaced by automation can at least maintain a basic living standard.

But where will the money come from to fund this scheme? Bill Gates, founder of Microsoft and present-day philanthropist, proposes that we tax the same companies that are displacing the workers in the first place. Again, this might not please the owners of such companies, but is it really fair that only a select few benefit from a system while millions of others suffer?

The problem with all these people losing jobs is that it has a ripple effect through the economy. Middle-income workers might belong to a medical aid, go to restaurants once or twice a month, or be able to afford to put their children in private schools. Without an income, these types of luxuries will be the first to be given up in order to make ends meet.

But just think for a moment how many other jobs will eventually be lost in the process – the administration clerks working for that medical aid, the groundsmen of the private school, and the kitchen staff of the struggling restaurant. The cuts will start with the “low-hanging fruit” – jobs that could themselves be replaced by robots or simply be dropped in the pursuit of profit.

Even low-income workers that support local economies and spaza shops will change their buying and lifestyle habits. And as all these workers lose the ability to contribute to the economy or fall out of tax-paying brackets, where will governments get the money to support their population?

Robot workers don’t pay taxes, don’t belong to medical aids and don’t buy Christmas gifts from local arts and crafts markets. So I think Bill Gates is definitely on to something.

Utopian Society

What would a world look like where robots do all the work while humans get paid?

It sure sounds a lot like the plot of some utopian science fiction novel. I personally think that society as a whole will uplift itself to a higher level, because without the daily struggle to provide for basic needs, people can pursue other passions. Entrepreneurs and business-minded people will still seek opportunities to make money or provide services, but people who previously had no way of entering the economy can suddenly be part of it. Bill Gates also believes that we could focus more on humanitarian work, looking after our children and their education, and caring better for the elderly.

Yes, there will always be crime, corruption and other social problems – as there have been throughout history. But there will also be people who seek higher forms of fulfilment like art, music, literature and poetry.

We hold the cards

I have read somewhere that one of the last jobs to be replaced by robots will be that of the politician! The irony is that the same people who have the power to do something about workplace automation will make sure that their own jobs are protected until the bitter end.

So why not make use of that power for a greater good and start having a serious look at ways to address the issue? We know that the problem will only get worse in the future, so now is the ideal time to come up with a solution.


 

Other sources:

The World Population Prospects: 2015 Revision (http://www.un.org/en/development/desa/publications/world-population-prospects-2015-revision.html)

http://www.un.org/en/development/desa/news/population/world-urbanization-prospects-2014.html

“Those jobs are gone forever. Let’s gear up for what’s next.” (Quincy Larson, Medium, 2017-02-06).

Space Travel and Science Fiction

I have always been a huge fan of science fiction. What really draws me to the genre is the fact that authors and movie-makers are not scared to “challenge our minds”, exploring ideas and concepts that might seem implausible, even ridiculous, today. As Robert A. Heinlein puts it: “(Science fiction) is a realistic speculation about possible future events, based solidly on adequate knowledge of the real world.”

A popular saying is that “today’s science fiction is tomorrow’s science fact”. The irony is that there is more truth to this statement than we would care to admit. Thirty years ago the ‘portable telephone’ and the ‘information highway’ were the stuff of science fiction. Today there are more cell phones on the planet than people, and the Internet is the number one source of information, communication and entertainment.

For this post, I am investigating some of the science fiction predictions for space travel – from the absurd to the “somewhat-plausible” to the “it-is-only-a-matter-of-time”.

Leaving Terra Firma

To get into space, you first need to overcome the effects of gravity – one of the fundamental forces of the universe, which affects all matter at a macroscopic level.

Currently, the only technology available is rockets, which are tremendously expensive – so much so that only governments, multi-national consortiums and the richest billionaires on earth are able to fund space projects. Rockets need to generate enough thrust to escape the pull of the earth’s gravity (g = 9.80665 m/s²) and have to burn enormous amounts of expensive fuel in the process.

In his 2006 novel Gradisil, Adam Roberts explores the idea of ordinary people being able to simply “fly” into orbit, set up habitats and live in space – a region they call the “Uplands”. In order to do so, regular jet planes are adapted to use the resistance of the earth’s magnetic field to produce lift. He calls this effect ‘magnetohydrodynamics’: the plane’s wings become giant electromagnets that cut into the lines of force of the magnetosphere. The ascent is slow, similar to a seagull or eagle using wind currents to soar systematically higher and higher.

Whether or not this technology is viable for future space travel is debatable. However, it provides an interesting alternative to current space programs’ investment in rocket technology. As one of the characters in the book explains (rather animatedly): “…because von Braun (the ex-Nazi scientist who eventually spearheaded the NASA space program – editor) was so influential, nobody explored other means of flying to space. When NASA planned to fly a hundred-kilo man into orbit they could have taken the money they were going to spend on doing that and instead spent it on building a replica of the man in solid gold, that’s what the costs were.”

Space Elevators

Another alternative for getting into orbit is the space elevator. The concept of the space elevator was first proposed in 1895 by Konstantin Tsiolkovsky. Arthur C. Clarke introduced it to the wider science fiction audience in his 1979 novel, The Fountains of Paradise. Robert A. Heinlein (Friday, 1982) and David Gerrold (Jumping Off The Planet, 2000) also featured space elevators that reached into the sky like giant beanstalks.

The dynamics of such an elevator are not hard to grasp. A counterweight needs to be placed in orbit and connected to an anchor point on the surface with a strong cable or tether. The centrifugal force of the earth’s rotation keeps the counterweight in place (think of a bucket of water that you swing over your head without spilling a drop).

Tsiolkovsky’s vision of the elevator was a bit more ‘anchored’ in his era’s understanding of physics and construction, and consisted of a free-standing tower that reached the height of geostationary orbit (like a giant Eiffel Tower). Today’s thinking leans more towards a very strong but lightweight cable that is capable of carrying its own weight as well as the additional payload that must be lifted. It needs to be about 36,000 km long – roughly the altitude of geostationary orbit. The material has not been perfected yet, but scientists are currently working on carbon nanotube and diamond nanothread technologies.
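A rough back-of-the-envelope check of that length, assuming the counterweight sits at geostationary altitude and taking standard values for Earth’s gravitational parameter GM and its sidereal rotation period T:

  r = \left(\frac{GM\,T^{2}}{4\pi^{2}}\right)^{1/3} \approx \left(\frac{3.986\times10^{14}\ \text{m}^3/\text{s}^2 \times (86\,164\ \text{s})^{2}}{4\pi^{2}}\right)^{1/3} \approx 42\,164\ \text{km from Earth's centre}

  \text{cable length} \approx r - R_{\text{Earth}} \approx 42\,164\ \text{km} - 6\,378\ \text{km} \approx 35\,786\ \text{km}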

The benefit of such a space elevator is that all the raw materials needed to construct spaceships can be lifted up from earth and assembled in zero gravity (by robots or humans).

In order to make it economically viable, such an elevator would probably consist of hundreds of cargo pods that move up and down together like a giant Ferris wheel. It would obviously mean that a journey to the top could take many days, as each pod needs to be unloaded at the top while an empty one is loaded at the bottom. But the money saved compared to burning rocket fuel would be well worth it.

Space Launch

So let’s assume a new spaceship has been assembled outside the earth’s atmosphere or even on the moon. The next step is to launch it towards its destination – an action that also requires some sort of energy transfer.

In Kim Stanley Robinson’s 2015 novel Aurora, a starship of the same name is launched by an electromagnetic “scissors” field. Two strong magnetic fields hold the ship between them, and when they are swept across each other the ship is briefly projected forward at an acceleration of 10 g (almost like squeezing a watermelon seed between the fingertips).

In addition to this, a powerful laser beam was kept focused on a capture plate at the stern of the ship’s spine for a period of 60 years, accelerating it to full speed.

But there are other objects in space – planets, moons and even suns – whose energy could also be harnessed to propel spacecraft on long interstellar journeys. The most famous example is Voyager 1.

Voyager 1 was launched in 1977 and is the first human-made object to cross the heliopause (the boundary of the heliosphere) and enter interstellar space. On its journey to the edge of our solar system, Voyager had a little help from the gravitational fields of Jupiter and Saturn to slingshot it towards its destination.

The mathematics behind such a manoeuvre is extremely difficult and is commonly known as the “three-body problem” – a reference to how the gravity of the sun and a planet influences a third object’s trajectory. A brilliant 25-year-old mathematics graduate called Michael Minovitch solved this problem in 1961 (with the help of an IBM computer) and proved that an object flying close to a planet could steal some of the planet’s orbital speed and be accelerated away from the Sun.

Another problem is that the planets are not always in the correct position to execute the slingshot. The Voyager missions were specifically planned to take advantage of the fact that Jupiter, Saturn, Uranus and Neptune would all be on the same side of the Solar System in the late 1970s. Such an opportunity would not present itself again for another 176 years!

In Aurora, Kim Stanley Robinson also used the inverse of the slingshot effect to decrease the speed of his spaceship as it re-entered the solar system, using the planets’ gravitational fields for ‘aero-braking’. His ship also had a very intelligent computer on board that was able to make the required calculations during flight.

Bistromathic Drive

In Douglas Adams’s Hitchhiker’s Guide to the Galaxy series, he introduced us to the Bistromathic Drive, which operates on a revolutionary way of understanding the behaviour of numbers. Just as Einstein observed that space and time are not absolute but depend on the observer’s movement in time, scientists discovered that numbers are not absolute, but depend on the observer’s movement in a restaurant.

The first non-absolute number is the number of people for whom the table is reserved. This varies between the reservation, the actual number of people showing up, the number of people joining the table during the evening, and the number of people leaving the table when they see someone else turn up whom they do not like.

The other non-absolute number is the arrival time – a number whose existence can only be defined as being anything other than itself. In other words the exact time when it is impossible that any other member of the party will arrive!

The novel further explains how these and other numbers are utilized in the drive so that the ship is capable of travelling two-thirds of the way across the galaxy in a matter of seconds.

Ion drives and more

Although the Bistromathic Drive will not be in production any time soon (or at all!), ion drives (or ion thrusters) are already a reality. The drive creates thrust by accelerating ions (charged particles) with electricity, and has been used in the Deep Space 1 spacecraft that did a flyby of a nearby asteroid.

Although current ion drives do not provide blindingly fast acceleration (0–60 mph, or 0–96 km/h, in four days), their appeal to science fiction writers lies in the fact that the fuel weight requirements are much lower than for traditional rockets. In theory it is also possible to ionize any element known to man, so a spaceship could be built and fuelled on Mars or Europa, for example.

The concept of the ion drive has already been depicted as far back as 1910 by Donald W. Horner in his novel By Aeroplane to the Sun: Being the Adventures of a Daring Aviator and his Friends.

Another science fiction concept that is being taken seriously today is the solar sail. Just as a normal ship uses sails to harness the wind’s power, solar sails use the pressure of reflected solar radiation to propel the ship forward.

Jules Verne already mentioned the idea of light propelling spaceships in his 1865 novel From the Earth to the Moon, although the science to explain it was not yet available at the time. In 1951 an engineer named Carl Wiley wrote an article for Astounding Science Fiction (under the pseudonym Russell Saunders) called “Clipper Ships of Space”. This article influenced many science fiction writers, for example Pierre Boulle, who mentioned these “sail craft” in his 1963 novel Planet of the Apes.

Finally, the most popular mode of space travel in science fiction today is of course the hyperdrive, or faster-than-light (FTL) travel. Spaceships using this technology enter an alternative region of space, known as “hyperspace”, by means of an energy field and exit at another location, thus effectively travelling faster than light.

Travelling via hyperspace, or “jumping”, has featured in short stories by Isaac Asimov (1940–1990), in Arthur C. Clarke’s 1950 short story “Technical Error”, and in Star Trek, Star Wars, Babylon 5 and Battlestar Galactica.

Hyperspace is a convenient way of bypassing Einstein’s Theory of Special Relativity, which implies that nothing with rest mass – i.e. anything heavier than a photon – can be accelerated to the speed of light.

Conclusion

Space is big and even the closest star system to our own, Alpha Centauri, is 4.37 light years away.

With today’s technology, the first humans to reach Mars within the next two decades will have to endure a trip of 8.5 to 9 months just to get there. If we have any hope of reaching Alpha Centauri within a single human lifetime, radical new ways of space travel must be invented.

And one science fiction writer’s dream today might just turn out to be the solution we have been looking for all along.

Technology behind the Panama Papers

“Hello, this is John Doe. Interested in some data?” This is how a reporter at the German newspaper Süddeutsche Zeitung was contacted in February 2015 via an encrypted chat service. This source would eventually leak 2.6 terabytes of information detailing how a Panamanian law firm helped clients to set up anonymous offshore companies and bank accounts.

This data was finally revealed to the world in April 2016 as the “Panama Papers”, and the company implicated was Mossack Fonseca.

What happened between these two events is an almost cloak-and-dagger tale of an enormous effort by hundreds of investigative journalists, all made possible by the extensive use of technology.

John Doe

According to the German newspaper “the source wanted neither financial compensation nor anything else in return, apart from a few security measures.”  No personal meetings ever occurred and communication was always encrypted. He did indicate to the newspaper that his life was in danger.

It must be stated that, although it is not necessarily illegal to have offshore bank accounts, many wealthy individuals and/or criminal organizations hide money in these accounts to avoid paying taxes in their own country. The purpose of this post is not to implicate any individuals, companies or firms that assisted or benefitted from the Mossack Fonseca scheme. It is purely a glimpse into the technology behind this leak – at least the part that was publicly revealed.

Size of the leak

Over the next couple of months, the source would systematically release pieces of data to the German reporter. According to Süddeutsche Zeitung, the information shared totalled:

  • Emails (4 804 618)
  • Database formats (3 047 306)
  • PDF documents (2 154 264)
  • Image files (1 117 026)
  • Text documents (320 166)
  • Other (2 242)

To put it into context: the Ashley Madison hack of 2015 was reported to be around 30 GB of data, and the Sony Pictures leak of 2014 a massive 230 GB. The Panama Papers outweigh these by more than 10 times!

How the leak began

According to sources, the leak started as a fairly “normal” hack of Mossack Fonseca’s (MF) email servers. Also typical of these hacks was that MF was neither open about it nor quick to respond. As in so many of these cases, this is partly about limiting damage to the company’s public image, partly about directors or boards lacking the technical knowledge, and partly about insufficient technical staff to deal with it.

In addition, Forbes reported that the main MF portal clients used to access their sensitive information ran on a three-year-old version of Drupal (7.23), which had known vulnerabilities at the time that hackers could exploit.

So, while MF was ‘dealing’ with the situation, the anonymous source was syphoning off huge amounts of sensitive customer information and sending it to the German newspaper.

Please note that I am not promoting or glamorizing hacking at all. What I can say is that, even though hacking is illegal, there is a degree of public tolerance towards it when a ‘bad’ company is hacked. Compare the hacks of Ashley Madison and Sony Pictures – the former is perceived as almost a valiant act, like Robin Hood stealing from the rich to give to the poor.

Finding needles in a giant haystack

The German newspaper appointed a five-person team that worked tirelessly for two months to verify that the data was genuine.

It very soon became apparent that one of the major aims in the vast majority of cases was to conceal the true identities of the owners.

Trying to connect the dots in this web of complex, secret transactions almost became an addiction for the team. “We often messaged each other at crazy times, like 2 a.m. or 4 a.m., about the newest findings,” one of the reporters said.

But the sheer amount of data proved too much for this small investigative team. Imagine trying to find a cash receipt for a purchase made during a holiday 15 years ago and then cross-referencing it to an email from a travel agency to confirm that you personally booked that holiday. Maybe not so difficult if you scanned and saved the receipt with a properly named filename and kept archives of your email conversations. Now try finding that information buried inside 2.6 TB of data, without knowing any names up front, nor even that such a relationship should exist in the first place…

In the end, the newspaper could not sift through emails and account ledgers (covering nearly 30 years) on their own. They had to seek help, and found it in the form of the International Consortium of Investigative Journalists.

Help from Down Under

The International Consortium of Investigative Journalists (ICIJ) is a non-profit organization based in Washington D.C. that has coordinated several previous projects investigating financial data leaks.

Apart from Süddeutsche Zeitung, the ICIJ invited many other influential newspapers and news agencies from across the world to form a coalition with the common goal of investigating the Panama Papers. This included Le Monde (France), The Guardian and the BBC (Britain) and La Nación (Argentina). Eventually 400 journalists joined forces in this effort.

From the start it was critical that the ICIJ investigation remained secret. But data still had to be shared between hundreds of journalists across the globe. To achieve this, many software systems and packages had to be utilized – some open source, some proprietary.

Journalists had to ensure that all files and their replicas were spread across different encrypted hard drives, using VeraCrypt software to lock up the information.

Süddeutsche Zeitung decided to use the software of an Australian company called Nuix, which had assisted the ICIJ in leak investigations before.

Nuix specializes in turning huge amounts of unstructured data into an indexed and searchable database. Its origins date back to the year 2000, when a group of computer scientists at a Sydney university were exploring ways to process large amounts of data at high speed.

The newspaper journalists started by uploading the millions of documents to high-performance computers that were never connected to the internet, in order to prevent the story from breaking too early and to protect it from those seeking to destroy it.

Once uploaded, they used optical character recognition (OCR) software to transform scanned images, such as ID documents or signed contracts, into human-readable and searchable files. They could then start analysing the data using the search algorithms provided by Nuix’s software. This allowed journalists to formulate questions that would in turn kick off a backend database search for matching data – much like how web search engines work. The ability to index and analyse all types of data was the real key to the success of the project.

Nuix actually stands for New Universal Intelligence Exchange engine, the name given to the software by the Australian computer scientists who developed it. The driving force behind Nuix was Jim McInerney, who formed the company but passed away in 2004. Jim and his team originally started by processing email files on a large scale, but they soon developed techniques to reverse-engineer all major file formats, including some complex and proprietary ones, such as TIFF images.

Just before Jim’s death, Nuix won a contract with the Australian Department of Defence. His family tried to run the company after his death, but eventually had to bring in a professional management team in 2006, led by a new CEO, Eddie Sheehy, who had worked with companies like Cisco before.

Some unexpected interest in the company resulted from the financial crisis of 2008. A lot of money was lost in global financial markets due to the property bubble crash, and people demanded answers. Software was supposed to help figure out what went wrong and who was to blame. Nuix became one of those solutions.

Nuix is certainly not the only firm that provides data-processing solutions – even the ICIJ used other software, as we will see later. Nuix made its name by being very fast at processing huge amounts of data.

Understanding files “at the level of ones and zeroes is what allows Nuix to achieve reliability and speed at scale,” Eddie explains on the company’s website.

The ICIJ’s technical army

The ICIJ is by no means just a group of ‘old-school’ investigative journalists. They have all the means and expertise required to operate in today’s digital and data-driven world.

Tools and software that were used in previous leak investigations were re-used or enhanced during this investigation. A lot of these tools were open-source.

Their search tool was based on Apache Solr, combined with Apache Tika, a toolkit that can detect and parse many different file types, such as PDF.

They utilized a database search front-end called Blacklight that allowed the teams to hunt for specific names, countries or sources. On top of that there was also a real-time translation service for documents that were created in other languages. (Journalists primarily used English as the communication language.)
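To make that concrete, here is a minimal sketch (not the ICIJ’s actual setup) of how a keyword search against an Apache Solr index might look from Node.js. The core name “panama” and the field names are hypothetical, and it assumes Node 18+ for the built-in fetch:

  // Ask Solr for the first ten documents matching a name, then print their ids.
  const query = encodeURIComponent('"John Doe" AND country:PAN');
  fetch(`http://localhost:8983/solr/panama/select?q=${query}&rows=10`)
    .then(res => res.json())
    .then(data => {
      // Solr returns matching documents under response.docs
      data.response.docs.forEach(doc => console.log(doc.id));
    })
    .catch(err => console.error('Search failed:', err));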

Each news organization took their own precautions, restricting access to the secure computers that were used to connect to the ICIJ’s servers and ensuring that these were not accessible through their newsrooms’ regular networks.

The news broke

When the findings of the Panama Papers were released to the world on 3 April 2016, it immediately caught the public’s attention – mainly because some well-known and powerful individuals were implicated.

But although the story produced sensational newspaper headlines for many weeks, it was really the hard work and collaborative effort of the hundreds of individuals working behind the scenes that made it possible – all with the help of technology.

Sources

http://www.sueddeutsche.de

https://panamapapers.icij.org

http://www.nuix.com

http://www.forbes.com

http://www.theguardian.com

http://www.nytimes.com

https://en.wikipedia.org