Friday, November 7, 2014

Nocturnal Emissions

I share a fortunate facility with a good percentage of software developers. After working on a knotty but unresolved problem during the day, I often wake at some point in the night with a solution. Happily, and unlike memories of dreams that quickly melt away, I can always be confident that the night-time solution will be fresh and available to me in the morning when I get back to work. In almost every instance the solution is viable and optimal, although on odd occasions it needs a tweak or two to reach that state. More strangely, I sometimes wake during the night aware that my brain has identified a bug in a block of code written the previous day – with which, as far as I was aware, I was pretty content.

What this says about how brains work, or even about consciousness, is something I had not put much thought into – well, until recently. I read a reply from “The Woz” to a correspondent on the arcane subject of electrical noise on one of the power buses of the Apple 1 board. It included the following:

I awoke one night in Quito, Ecuador, this year and came up with a way to save a chip or two from the Apple II, and a trivial way to have the 2 grays of the Apple II be different (light gray and dark gray) but it’s 38 years too late.

Truly wonderful. I doubt that I would ever have an experience like that, but then again I am far from sharing the genius of Steve Wozniak and his mind-boggling ability to reduce hardware and software to its bare essentials and then build the simplest thing that “just works”. I am just happy that I also have the odd night-time emission. But what does this say about our brains and their abilities?

I suppose we have come to understand from pop science texts that our brains do stuff and then let the bit that runs our conscious selves know about whatever it is in time to make our conscious selves think we are in charge. But “knowing” that and believing it are two distinct things. The evidence is there but largely we choose to ignore things that we experience that might otherwise support the notion that “we” are not in charge – with our conscious selves just along for the ride. Apart from anything else the criminal justice implications are moot to say the least – it is much simpler to accept the premise that the bit of ourselves we call “me” is responsible for what we do.

So, do I have a “Turing machine” that runs code in my head? [Can’t be an Intel chip emulation because it rarely tussles with Assembler.] Does Woz have an Apple II board emulated in neurons? How does this work? More to the point, how does detailed and skilled analysis get done without conscious direction and intervention? The simplest answer is, of course, that conscious intervention has no part to play – probably just turning up at the end of the show to take an undeserved bow. The unconscious portions of our brains are just working as normal. One wonders just how many tasks our brains can work on simultaneously and autonomously.

I think that just about all car drivers will have experienced moments on the road when they realise they have been driving for some time without conscious intervention. This can occur during periods of relatively complex city driving as well as during simpler dual carriageway motoring. Given that we have to acknowledge that those unconscious driving episodes were apparently well executed and risk free, just what does it say about the part played by consciousness?

Does any of this have anything relevant to say within the context of a company blog?

Well sort of.

We did not make the cut on the second round of SBRI funding for the Betsi Cadwaladr University Health Board 10% challenge. Our reaction was one of disappointment and relief. We were disappointed that we could not execute on our plan because we had a great deal of confidence in its potential to deliver the necessary benefits. There was also some relief, as it was clear that the second round funding level was going to fall well short of the full requirement and the task of raising the full sum required to deliver was looking somewhat daunting and (inevitably) tangential to the main objectives. We had been pretty surprised that the second round selection process quite simply failed to address what we saw as key issues – namely the viability and financial health of the projects and the businesses involved. There again, this was far from the most astonishing experience or discovery made during the proof of concept stage.

Given that we developed Intellectual Property (as it is grandly referred to) and technologies, and proved their relevance to the SBRI project objectives, what are we doing with those concepts and ideas now? Just what are we up to?

All I can say is that there is an ongoing process and that this is proceeding in an organic sort of way without a lot of direct conscious intervention. See – I said the preamble would have some relevance. There is a long background process we are executing on and we are confident that the right solution and direction will pop out and set us off running – shortly. We just have to decide which elements we are most attached to and which can be supported by a viable business plan. Once our processes have arrived at the optimal solution the conscious part of the business can start in on managing the execution.

Tuesday, September 16, 2014

Web Components

I must have come across mention of “Web Components” one of the times I reviewed the emerging standards of HTML5 – but put them to one side against the day when they became concrete enough to be worth following up. For them to become established we needed browser support for some specific items, namely: HTML Imports, Shadow DOM, Templates and the ability to define and consume web components. I think I looked away for too long but fortunately ran into a Google IO presentation on Polymer.

It looked to me, when watching the video, as if web components addressed a few issues that came up while we were building a very large web application as part of our SBRI “Proof of Concept” project earlier this year. So what were some of the things we found “good” and some things we found “bad” when building a very large web application?

Certainly we have to list JavaScript modules under the good heading. This approach to developing JavaScript allowed us to write a lot of code with explicit namespaces and without having to worry about variable scope leaking. We ended up with a lot of modules dedicated to a particular set of functionality and happily “chatting” between themselves to ensure there was a consistent view of application state. You can find a clear introduction to the code pattern here if it is unfamiliar.
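For anyone unfamiliar with the pattern, here is a minimal sketch of the sort of “revealing module” we mean – the names are illustrative, not taken from our actual code base:

```javascript
// A minimal "revealing module": an immediately invoked function
// expression (IIFE) that keeps its working variables private and
// exposes only a chosen public surface on a single named object.
var WardModule = (function () {
  // Private state - invisible outside the closure, so nothing
  // leaks into the global scope.
  var patients = [];

  function add(name) {
    patients.push(name);
  }

  function count() {
    return patients.length;
  }

  // Only these members are reachable, as WardModule.add and
  // WardModule.count; the patients array itself stays hidden.
  return {
    add: add,
    count: count
  };
})();
```

Modules built this way can “chat” between themselves by calling each other’s public methods, while their internals remain safely out of reach.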

As a great deal of the JavaScript was involved in DOM manipulation we saw jQuery as a foundation technology and used custom jQuery UI Widgets to enhance the UI and provide re-usable code that addressed repeated functionality within the web page. As an example, we have a fancy custom widget that consumes HTML tables and turns them into sortable, searchable, scrollable data grids, with the support code encapsulated in just the same way as our JavaScript modules but capable of supporting multiple instances. This was a good start but did not go very far in reducing the overall volume of HTML that had to be written and debugged. Editing thousands of lines of HTML can be quite a burden – it is easy to get lost – so one thing that was bad was that it is difficult to break HTML pages down into modules, and particularly difficult to re-use blocks of HTML when similar layout and functionality is required in multiple places within a web page. Some sort of “templating” could be just the thing here if you could just “import” a reference to a given template.
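Setting the jQuery widget plumbing aside, the core of such a grid widget is just sorting and filtering the row data scraped out of a table. A stripped-down sketch of that logic in plain JavaScript – outside any widget wrapper, with invented function names – might look like this:

```javascript
// Core data operations behind a sortable, searchable grid widget.
// Rows are arrays of cell strings, as they might be extracted from
// the <td> elements of an HTML table.
function sortRows(rows, columnIndex, descending) {
  // Copy first, so repeated sorts never disturb the original order.
  var sorted = rows.slice().sort(function (a, b) {
    return a[columnIndex].localeCompare(b[columnIndex]);
  });
  return descending ? sorted.reverse() : sorted;
}

function searchRows(rows, term) {
  // Keep any row where at least one cell contains the search term.
  var needle = term.toLowerCase();
  return rows.filter(function (row) {
    return row.some(function (cell) {
      return cell.toLowerCase().indexOf(needle) !== -1;
    });
  });
}
```

In the real widget these operations would be wrapped in the jQuery UI widget factory so that each table on the page gets its own independent instance.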

CSS3 was a surprise late entry on the “bad” list. Don’t get me wrong, CSS is brilliant and allows us to style and add functionality to a web page in an ordered and predictable way. However there are times when you just wish you could turn it off for a given section of a page. You have some HTML elements that are being styled incorrectly at a particular point as “rules” cascade. Unpicking or backtracking the cascade and then adding to the CSS to override particular styling for specific elements (or groups of elements) can be time consuming and adds to the volume of traffic that needs to be downloaded to render the page in the browser. Wouldn’t it be nice to be able to just define some specific CSS rules that apply to a given web page section with no interference from any cascaded or global settings? What you need of course is a “Shadow DOM” – a place where CSS rules can only penetrate from “outside” when you explicitly define that behaviour.

The Polymer project from Google introduces some excellent core web components that you can build upon to create new components of your own. They lend themselves to responsive design techniques and provide the foundation for re-usable web page chunks (that can, indeed should, include CSS and JavaScript) ready to be applied with local overrides as required. Components can be nested and combined to create a highly functional UI with the code modularised and organised in a way that is particularly attractive to developers and (I strongly suspect) designers.

This is not a technical blog (although this post strays into technique) so I won’t get into the detail of getting web components up and running but I can say that it is very simple and you can ignore things like bower, python web servers and node.js (not that node is not a great tool in its own right) and get things quickly working with a vanilla web server (even IIS Express).

Thursday, September 11, 2014

Microsoft Toys

Microsoft sent me a new toy today – it arrived via Fedex from the USA in a carefully packed cardboard box. The box contained four individual packages.

One contains a 16GB Micro SD card (that reputedly has a Windows image of some sort on it) and a 5V 10mA LED.

A second contains a cat5 Ethernet cable.

The third a nicely boxed USB to Ethernet adapter from Network Adapters.

And the fourth (the main component), an Intel Galileo dev board and power supply.

All this for free, to just see what I might make with it all – and presumably to demonstrate Microsoft’s new interest in the “Internet Of Things” (IoT) – which is, possibly, going to be the “next big thing”. I have some reservations – just how many smart light-bulbs that require a phone or tablet to turn on and off can anyone use? However, I can see the opportunities that might accrue from a wide array of active sensors/actuators working together to a collective end and acknowledge that software will need to be developed to build practical applications and sensible APIs. Plus I have a data collection concept in mind that might very well suit.

As we are having a bit of a break from the SBRI project while our erstwhile NHS partners make up their mind if they want to hitch themselves (and Welsh Government money) to our rising star for a second phase – the timing is near perfect.

So, off to the Microsoft Developer’s Program web site for instructions on how to get started.

  1. Install Visual Studio 2013 – already got that so “check”.
  2. Download WindowsDeveloperProgramforIOT.msi with a nice clickable link – and things started to go wrong. A Google search sorted out the need for a couple of undocumented rounds of “logging in” to stuff and accepting EULAs left, right and centre, and then I got to a place where it was indeed possible to download the file and install the relevant VS extension kit.
  3. Check that a Telnet client is installed on the dev machine – I am a PUTTY fan myself, as it can do rather more out of the box (even though there is no actual box).

You can read about the Galileo board here but in short - “It’s a board based on Intel® architecture designed to be hardware and software pin-compatible with Arduino shields designed for the Uno R3.”

The Galileo can be programmed using the standard Arduino tools and a dialect of C or (as in this case I suppose) it can run a Windows OS and be programmed using Visual C++.

This could be interesting, as I can run some mini dev projects on a (now classic) Arduino R3, a Netduino (writing C# code currently in VS 2012) and the Galileo board using C++ and VS 2013. Plus I suppose I should consider the Raspberry Pi (last used to emulate an iBeacon before I got some sample dedicated units) – running – what? – Python? Desk space is what I am going to need.

Wish me luck.

Tuesday, May 27, 2014

Adit Limited revamps and repositions

We are pleased to announce that Adit Limited has transferred the ownership of some significant IP (Intellectual Property) in the form of copyright to one of our long-term business partners, Marston's plc. This transaction will allow Marston's greater freedom to enhance some key customer facing services and develop new ways to deliver those services.

Adit Limited are also transferring all IP in the Sea Kayak Wales website together with all associated maritime mobile apps to the business trading as Anglesey Stick. In addition, certain legacy products that have reached the end of their lives will be withdrawn from sale (or free download) and no new support contracts for those products will be agreed.

Adit Limited will take advantage of associated reductions in their legacy code base and anticipate that this will result in greater freedom to restructure and subsequently include new shareholders. We are re-inventing the company as a software startup focused on our HAPTIC venture and the healthcare market.

Wednesday, March 26, 2014

SBRI Challenge And Our Healthcare Project

We are delighted to announce that Adit Limited has won proof of concept funding under an SBRI (Small Business Research Initiative) challenge and we are looking forward to working with Betsi Cadwaladr University Health Board to complete this important project over the coming months and years.

The “press release” goes like this:

Health And Patient Treatment Information Centre (HAPTIC)

The SBRI challenge to enable nurses to spend 10% more time with patients in direct value added care struck an immediate chord with Adit Ltd. The challenge lies in implementing a major change to systems and procedures on a busy ward, in contrast to the more usual static office environment. This requires something different, even if some of the foundations are familiar. HAPTIC will deliver user (patient and staff) friendly systems to support an environment thriving on constantly changing priorities and clinically urgent interruptions. 

HAPTIC will ensure that patients and their carers are always at the centre of the solution by delivering a consistent set of services at the nurse station and at the bedside. It will use light, modern tablets for bedside delivery to the nurse, patient or carer, backed up by intuitive software that maximises the support given to the nursing team with minimal input or break in the work-flow. Through HAPTIC, Adit Ltd identified opportunities to apply modern location technology and precise patient identification as a start point. Building a resilient software solution founded upon accurate, and up to the minute, patient data then became realistic even in this challenging environment. Focused, interactive applications enhance the user experience, enable near real time patient feedback and facilitate co-production of care.

Our project seeks to add value at every step ensuring that the nursing and medical teams make the gains they need to deliver enhanced patient care.

What we really want to communicate is just how pleased we are to be working with a great team at BCUHB and how much we are looking forward to blowing them away by demonstrating just what we can do to enhance the ward routine and assist them in improving patient outcomes.

Now comes a ton of hard work as we have a remarkably short time slot to deliver on our promises with sample software and technology evaluations. It is going to be an exciting ride.

Monday, March 24, 2014

More Data

Pushing software engineering boundaries can involve many different approaches. There is no overarching way that always leads to success.

In the fields of robotics and Artificial Intelligence (AI), more data – and in particular better quality data – can trump a lot of work on sophisticated algorithms.

We have a project that needs to grab all of the potential efficiency gains there are going. One obvious candidate is to implement a predictive text feature - you know, where the software predicts what you are going to type next and offers one or more suggestions. However, some careful thought showed that this might not be the low hanging fruit it first appeared to be. It will be vital in many instances that any text suggestion be correct - not making a suggestion would be better than offering the wrong one. This is an area where extreme accuracy is a key safety issue with, potentially, people's lives on the line. If there can be many very similar words or phrases, but in circumstances where those similar pieces of text have different meanings or might describe different things, then we have to face up to the fact that we have to be right 100% of the time.
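One way to honour that “right or silent” rule is to offer a completion only when the typed prefix matches exactly one term in the lexicon; with zero or several candidates the predictor stays quiet. A minimal sketch of the idea – the lexicon entries here are just example drug names, not real project data:

```javascript
// Suggest a completion only when the prefix is unambiguous in the
// lexicon. Return null in every other case, on the principle that a
// wrong suggestion is worse than no suggestion at all.
function suggest(lexicon, prefix) {
  if (prefix.length === 0) return null;
  var p = prefix.toLowerCase();
  var matches = lexicon.filter(function (term) {
    return term.toLowerCase().indexOf(p) === 0;
  });
  // Exactly one candidate: safe to offer. Zero or many: stay quiet.
  return matches.length === 1 ? matches[0] : null;
}
```

Typing “para” against a lexicon containing “paracetamol”, “penicillin” and “penicillamine” would produce a suggestion, while “peni” would not, because two candidates remain in play. Of course, a real system would need a far richer model than prefix matching – which is exactly where the wider lexical and semantic analysis comes in.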

We think this can be achieved by widening the lexical and semantic nets and supporting the analysis of a great many words and phrases to achieve the objective.

I am sure we will post more on this topic as we develop and refine the techniques we end up applying - in the meantime it is another area we need to research carefully. Others may have trod this path before us. Plus we are going to need a lot of quality data to run trials and tests on.

Full Stack Startup

I was reading a post by Chris Dixon which may well have coined the term "Full Stack Startup" - you can read it here.

I am sure he will forgive my quoting:

Suppose you develop a new technology that is valuable to some industry. The old approach was to sell or license your technology to the existing companies in that industry. The new approach is to build a complete, end-to-end product or service that bypasses existing companies.
Prominent examples of this “full stack” approach include Tesla, Warby Parker, Uber, Harry’s, Nest, Buzzfeed, and Netflix. Most of these companies had “partial stack” antecedents that either failed or ended up being relatively small businesses. 
Now this interested me greatly as we are working on a (soon to be announced) project where we have taken the view that we need to address the whole problem domain in order to deliver a complete solution. To do that we may well need to incorporate partial solutions worked up by others but overall we see ourselves as offering a "full stack". There are commercial advantages to be gained here alongside the clarity that stems from keeping the whole development area in view.

Tuesday, March 11, 2014

Agility - now there is a thing

I was reminded by a recent post by Dave Thomas, who was one of the original authors of the Agile Manifesto, that the movement started from some very simple ideas.

 These were:

  • Individuals and Interactions over Processes and Tools
  • Working Software over Comprehensive Documentation
  • Customer Collaboration over Contract Negotiation, and
  • Responding to Change over Following a Plan

but Agile became a noun and then a product. In most instances it became a "Cargo Cult" where the IT departments used the right words and symbols but did not understand how to get things done or even what the purpose was.

Despite the enthusiasm of so many developers (cultists included), many of us looked askance at this new "magic bullet" and waited for it to go the way of so many predecessors. This is a shame, as the initial ideas were all about delivering what the end user needed rather than what they (or their managers) said they wanted - and frankly, that has always been what I have been about.

Dave Thomas has now re-defined the concepts for a re-boot as:

What to do:

  • Find out where you are
  • Take a small step towards your goal
  • Adjust your understanding based on what you learned
  • Repeat

How to do it:

  • When faced with two or more alternatives that deliver roughly the same value, take the path that makes future change easier.

Now that I can sign up to. It is how I work day to day - with or without a particular customer or user in mind. Consider me a post-Agile developer.

Friday, February 28, 2014

I was recently reminded by a post of John Gruber's of Steve Jobs making a point about the way technology should be used when developing new products.

Jobs said:
"One of the things I've always found is that you've got to start with the customer experience and work backwards to the technology. You can’t start with the technology and try to figure out where you’re going to try to sell it. And I've made this mistake probably more than anybody else in this room. And I got the scar tissue to prove it."

He was probably referring to his experience running NeXT and when he returned to Apple one of his first actions was to kill off the pure R&D function.

This week, Google announced their Project Tango, which is a glorious piece of technology that is going to appeal to any development team working in mobile - especially if they have an interest in location sensing apps. I am sure that a lot of them applied for one of the very limited number of development kits - I know that we did. Is there a danger that you see something ground breaking like "Tango" and then look for a way to apply it? Well, that might be true, but with the number of projects we have a current interest in that are pushing the limits of what we can currently deliver using existing location finding techniques, I would surely like to see just how much further we could get using "vision" processing.

Tuesday, February 25, 2014

I have just clicked the restart button on the rather tired content of this blog, which has for some reason been long neglected. Not at all sure why so many new and exciting mobile (and related) developments have passed without due comment and analysis here.

So here is to turning over a new leaf and sharing some of the things we are working on here at Adit Limited.