The tech industry is a fast-moving and fickle creature, and 2016 showed no sign of that changing. The many different facets of technology are beginning to fragment into solid disciplines of their own, whilst staying deeply rooted in code-based innovation. It certainly feels like an exciting time to be alive, watching humans evolve from a physical age into a seemingly inescapable virtual one. In that sense, the merge will soon be something very real and almost normal to all of us.

React Native gained traction and Microsoft joined the party

React Native came about in 2015, but 2016 was its year. Working out how to translate JavaScript into native, cross-device applications was a super smart move by Facebook, and thousands of developers have flocked to the technology. It’s super simple too; the native bindings are easy to use and well documented, and we’re itching to use it more.

In a similar vein, Microsoft acquired Xamarin, a controversial acquisition which I feel is going to pay dividends for the tech monolith. Xamarin is Microsoft’s stab at bringing C# to all devices, enabling developers to deploy to multiple platforms simultaneously. To back it up, Microsoft has also released a beta version of Visual Studio for the Mac and open sourced the .NET framework, which is a massive shift from the way Microsoft has historically worked and a nod to the open source community.

Y+S started playing with AR/VR

The road to AR/VR competency is going to be a long one, fraught with difficulties and failures, but one which we’re committed to investigating to its fullest potential. In the last month or so, we’ve started to invest man hours into getting our heads around this technology. The main avenues of research so far are Unity and Vuforia.

For those who aren’t aware, Unity is a 3D engine that was originally built for game development, but in recent times has been recognised as one of the strongest players in the AR/VR sphere. After some research we also happened across Vuforia, which in its own words is ‘an Augmented Reality Software Development Kit (SDK) for mobile devices that enables the creation of augmented reality applications. It uses Computer Vision technology to recognize and track planar images (image targets) and simple 3D objects, such as boxes, in real time.’ In layman’s terms, it takes all of the hard work out of doing cool shit with AR, and we’re pretty excited to see what comes from it.

Microsoft HoloLens got me excited

Microsoft has unfortunately cemented its reputation as underdog to the seemingly unstoppable Apple; but with a new CEO in the mix, Satya Nadella, things are really starting to pick up for them on the innovation front. Perhaps the most exciting development was the Microsoft HoloLens, which showed Microsoft’s commitment to augmented reality.

Personally, I always thought VR was a bit of a pipe dream. The need to wear large goggles that transport you into a completely different, sometimes alien, world seemingly pigeonholes it into a life of gaming and non-utilitarian uses. AR, on the other hand, when executed well, brings untold possibilities for extending the physical world as we know it, and that excites me.

Microsoft’s strapline for this was ‘Mixed reality: the world is your canvas’. What’s not to love? Watching people interact with the Microsoft suite of products in their living room, and give product demonstrations for products that aren’t even built yet, was enough to spark my enthusiasm.

Although it’s currently quite a bulky technology, you can see where they want to go. Even though Google Glass was a flop (perhaps before its time?), the HoloLens is a work in progress, and Microsoft aren’t scared to admit it. It’s inspired me to start playing around with a few personal projects of my own.

WebAssembly slowly gaining support

WebAssembly, or WASM for short, is a low-level binary format that runs alongside JavaScript in the browser, touted to be faster and able to handle performance-critical operations a lot more efficiently than JavaScript does currently.

What’s exciting about that? Well, the main things that slow down any application are processing the data and compiling it down to something the machine can read; and for all the fancy things JavaScript and other languages do in terms of readable text-based code, garbage collection, object declarations and so on, it all slows everything down. WebAssembly is just raw data, and lots of it. One of the terms being bandied around is SIMD, which stands for ‘single instruction, multiple data’. Essentially, just get all your data in a row, and then fire it off with a single instruction.

This opens up some amazing capabilities. Full-speed in-browser gaming and VR will be a thing, as will ports of high-compute, low-latency apps like music production software or video editors such as Adobe Premiere Pro. That’s not to say that this isn’t possible with JavaScript, and great inroads are being made to increase the performance of JavaScript, especially by the guys at Node and the Microsoft ChakraCore team. WebAssembly will just make things a little less, well, awkward.
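To make that concrete, here is a minimal sketch of calling WebAssembly from JavaScript today. The byte array is a tiny hand-assembled module exporting a single `add` function; in practice a module like this would be compiled from C or C++ (for example via Emscripten) rather than written out by hand.

```javascript
// A hand-assembled WebAssembly module exporting one function:
// add(a, b), which returns the i32 sum of two i32 arguments.
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, // magic number: "\0asm"
  0x01, 0x00, 0x00, 0x00, // binary format version 1
  // Type section: one function type, (i32, i32) -> i32
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f,
  // Function section: one function, using type 0
  0x03, 0x02, 0x01, 0x00,
  // Export section: export function 0 under the name "add"
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00,
  // Code section: local.get 0, local.get 1, i32.add, end
  0x0a, 0x09, 0x01, 0x07, 0x00, 0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b,
]);

// Compile and instantiate the raw bytes, then call into them like
// any other JavaScript function.
const module = new WebAssembly.Module(bytes);
const instance = new WebAssembly.Instance(module);

console.log(instance.exports.add(2, 3)); // 5
```

Notice there’s no parsing of text-based source here at all; the engine is handed raw, pre-structured data it can validate and compile very quickly, which is exactly the point.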

Google’s AI translating between languages it hasn’t been trained on

There’s clearly no stopping the AI train, and whether it scares the hell out of you or not, it’s something to get used to.

The guys at Google have built a new neural network, aptly named Google Neural Machine Translation (GNMT), which showcases itself as an ‘end-to-end learning framework that learns from millions of examples’.

What’s amazing about it is that it is able to translate between language pairs which the system has never seen. So how does this work? The machine is fed two sets of translations to train on: between Japanese and English, and between Korean and English. This knowledge is then shared around the system, so that when a Korean-to-Japanese translation is requested, the machine can translate accurately, despite never having been trained on that pair directly.

Neural networks like GNMT are largely modelled on the approach the human brain takes to decision making. A network takes a variety of inputs and applies weights of importance, which then influence the outputs. The weights are where the smarts come in: as the neural network is trained, it modifies its weights depending on the inputs, outputs and the errors it encounters.

Bots Bots Bots

Finally we get to the real big hitter of 2016. If there was one technology most talked about among creatives and developers, and most requested, it’s the chatbot. Chatbots create something which feels personal to the end user, whilst removing almost any barrier to entry.

The reason bots come up time and time again as a winner against ideas involving apps is that they don’t usually require an install, and can engage with users intelligently and easily. One of the biggest challenges bots face is understanding the subtle nuances of language, and that is handled gracefully by Wit.ai, part of a growing horde of natural language processing (NLP) services which are due to flood the market.

Chatbots can be used for almost anything, from engaging with your customers to offering suggestions based on simple criteria. Anything, really. The technology itself is relatively simple: the user sends a message, which gets analysed by Wit; the intent is extracted from it; the relevant API requests are made; and the data is returned.
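That loop is easy to sketch. Everything here is hypothetical: the intent names, the handlers and the keyword-matching `analyse` stub, which in a real bot would be an HTTP call to an NLP service like Wit rather than a couple of regexes.

```javascript
// Stand-in for the NLP step: in production this would be a request
// to an NLP service, which returns the intent behind the message.
function analyse(message) {
  if (/weather/i.test(message)) return { intent: "get_weather" };
  if (/\bhello\b|\bhi\b/i.test(message)) return { intent: "greet" };
  return { intent: "unknown" };
}

// One handler per intent; this is where the relevant API requests
// would be made and the response composed.
const handlers = {
  get_weather: () => "It looks sunny today.",
  greet: () => "Hello! How can I help?",
  unknown: () => "Sorry, I didn't catch that.",
};

// The whole bot loop: message in, intent out, reply back.
function handleMessage(message) {
  const { intent } = analyse(message);
  return handlers[intent]();
}

console.log(handleMessage("hello there")); // Hello! How can I help?
```

The hard part, of course, is the `analyse` step, which is exactly why services like Wit exist; the plumbing around it is the easy bit.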

We’re currently focusing on chatbots for Facebook, and occasionally for Slack when we have an idea for how to automate something internally, but bots are also supported by WhatsApp, Skype and Kik.