At the same time, I can safely say that communication is the secret ingredient in any team effort, whether it is sports, family or building digital services. It wasn't until I was around 14 years old that I had my a-ha moment regarding the importance of communication. We had a basketball team of talented individuals, but we failed to reach our full potential and lost tight matches. It felt miserable back then.
Having watched our team crawl, our coach told us that we needed to start seeing and hearing each other on the court. He stressed that to be seen and heard, we needed to make ourselves heard and seen as well. And that's what we started to do: communicate explicitly by speaking, complemented with body language and various mutually agreed signs.
Most importantly, we were always on the lookout for signals from others, not primarily focused on self-centric talking. We transformed into a team that truly played together as a single unit, had fun on the court and, as a by-product of all this, started to win and became the team to beat.
That's all fine and dandy, but how does this really map to software product development? In the good old Agile Manifesto, we agilists prefer "Individuals and interactions over processes and tools" and "customer collaboration over contract negotiation". Two of the four pillars guiding us in modern development processes emphasize the importance of crisp communication.
Also, the classic Five Orders of Ignorance publication states that software development can be viewed as knowledge acquisition and ignorance reduction. It is easy to agree with this, especially when working on a greenfield project. In these projects, information is often gathered by applying human-centric design methods to product development. Success is highly correlated with the ability to communicate and to capture the problem domain in designs and software models.
When discussing product development, we can't bypass Conway's law, which states that "organizations which design systems (in the broad sense used here) are constrained to produce designs which are copies of the communication structures of these organizations". If we add poor communication to this mix, what kind of systems can we expect to create? We end up modelling mistaken and inaccurate communication structures into systems that are very effective at executing the wrong orders.
Many years ago, I joined a new team, highly motivated to ship some code and show my skills. The team's product owner played his part and we had a well-organized backlog at our disposal. But there was something awfully wrong. Silencio: our team was silent most of the time. We gathered for the mechanical daily stand-up, had a quiet lunch and then went home. The business problems we tried to solve were pretty straightforward, with very little business logic, but we still had no clue about the correct business rules. The customer was not co-located and we were a disconnected team.
Technically we played the Scrum game well, but we did not deliver and I felt lost. Despite my feeble efforts to get communication up to par, I failed. We had no connection and no communication medium. Luckily the team was dismantled for other business reasons. That team had no capability to produce purposeful software, regardless of our technical skills.
The dysfunctional team was mentally very hard on me, because just before it I had worked in a team that communicated with a totally different mindset. The business domain was extremely complex and the team had no prior history of working together.
Despite all this, we were able to nail down the complex requirements. We discussed every corner of the domain from different angles and did so constantly. We listened actively. We modeled the domain visually and kept the model on our team wall. The customer was co-located and we were able to go through every single detail whenever there was a need for it. We also had a psychologically safe environment, so we weren't afraid of mistakes and could question the requirements when we had a more suitable solution in mind. Without this high level of communication, the project would have been a disaster.
How do we get communication right, then? Well, according to Wiio's laws we can only succeed by accident, right? But there are a few things we can do to increase our odds of hitting that accident. The number one thing is to learn to listen, which is a lot harder than many think. Listening is not waiting for your turn to talk. Listening is asking follow-up questions and being interested in the message and the one who delivers it.
The second most important thing is to be explicit in transmitting information and to verify that the message has been received. Likewise, when we receive information, we need to be explicit in our acknowledgment or ask complementary questions. We developers can look to the HTTP protocol or any programming API for an example: the more explicit the schema or type system, the more likely we are to succeed in utilizing third-party components. Communication is always a two-way street: request and response, not forgetting data validation and processing - listening.
Misconceptions are common and often the root cause of many failures. The more complex and difficult the topic, the more one should favor face-to-face communication. Body language and facial expressions provide indispensable context-specific information when meeting people in real life.
Regardless of my efforts to focus on communication, I still often fail. There is always room for improvement. When we fail in communication, relationships end, countries are driven into conflicts, and the products we build to help our customers don't serve their purpose. We make poor decisions based on misinterpreted and incomplete information. But I believe there is still hope. If we choose to put that little extra effort into communication, listen with empathy and be explicit in transmitting and receiving the data, we might well end up crafting purposeful products and making the world a better place. I challenge you to try it.
This post was originally published in Wunderdog’s Blog
However, there is still a significant disconnect between development and operations regardless of all the DevOps buzz. Developers code, configure infrastructure and take care of the deployments. But then, there is the infamous someone who takes care of the operations and monitoring. I call this approach DevOpsBut, which might ring a bell for those familiar with ScrumBut.
The harsh reality is that our code creates value only when it runs in production. Regardless of how well we implement and test our code, none of our development elegance can guarantee that our service runs smoothly in production. There are just too many moving pieces in the puzzle.
From a bookkeeping perspective, we can think of our codebase as a liability, and it can only be considered an asset when it is delivering business impact in production. Therefore having operable software should be every developer’s top concern.
Any non-trivial software system contains essential complexity which is inherent to the domain. There is no way around that. We also tend to be guilty of building some amount of accidental complexity with our designs, regardless of how hard we try to avoid it. With that said, the only way to be in control of such a complex system in production is to measure the key business metrics and keep a close eye on them continuously.
First, our customers can interact with our service in the most peculiar ways, which can lead to unexpected results and errors. Different end-user devices, desktop and mobile browsers included, come with many quirks. No amount of manual exploratory testing can cover all of this.
Second, it is possible that our user interface is too complicated and our end users cannot get the desired actions done. In this case, everything is fine from a technical perspective - not a single trace of an error anywhere - but neither are any of the expected business transactions happening. Without business-metric-based monitoring we are happy while our customers are frustrated. How uncomfortable is that?
Third, software is turtles all the way down. For example, the downstream dependencies can become unavailable, or even worse, start responding slowly. Some parts of our core infrastructure might melt away for whatever reason. In a complex system, a small, innocent-looking anomaly can cause cascading failures, which might not seem interrelated. Due to concurrency, the sequence of events can be non-linear. These are scenarios that are close to impossible to foresee during development.
In general, there are two methods of quality assurance. One approach is to make sure no bugs are introduced in the first place. We focus our efforts on carefully testing our systems before deploying to production. With this laborious approach, few bugs reach production. This is all about optimizing the mean time between failures, or MTBF. The goal is to minimize the number of bugs and maximize the time between outages, regardless of the cost.
Another approach is to make sure that whenever an anomaly happens in production, we know it in the very first second and have detailed diagnostic data. With a proper deployment pipeline in place, we can fix the problem quickly, and the impact of this failure is small in both scale and time. Such thinking optimizes the process for mean time to recovery, or MTTR. Bug count is not the primary concern. Instead, we make sure that we are aware of any exceptions within the first second and can fix them promptly.
My question now is: would you rather live with the illusion of a perfect codebase surrounded by an ideal environment, or have state-of-the-art tools in place to inform you when the unexpected happens in production? Would you rather be the first one to know of any issues, or have customers report them? Which of these approaches makes you sleep better at night? Since resources are limited and software is complex, I would rather minimize the mean time to recovery than maximize the mean time between failures.
Monitoring merely technical data is better than nothing, but it only gets us halfway there. The next step is to start monitoring KPIs in real time. For an online web store, these can be the number of logged-in users, signups, shopping cart checkouts or payments. Once we have collected baseline figures for these transactions, we can set up alerts that trigger on unexpected drops or peaks in our service metrics. Now we know when our service is healthy from the business perspective as well.
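To make this concrete, here is a minimal sketch of what such a business-metric alert could look like in Node. The metric name, baseline and threshold are invented for illustration, and a real setup would push the counter to a monitoring service and page someone instead of just logging.

```js
// Minimal sketch: count a business event and compare it against a baseline
// once a minute. All names and numbers here are illustrative assumptions.
const counters = { checkouts: 0 }
const BASELINE_CHECKOUTS_PER_MINUTE = 50

// Call this wherever a checkout completes successfully.
function recordCheckout() {
  counters.checkouts += 1
}

setInterval(() => {
  const observed = counters.checkouts
  counters.checkouts = 0
  if (observed < BASELINE_CHECKOUTS_PER_MINUTE * 0.5) {
    // In a real system this would trigger an alert via PagerDuty, Slack or similar.
    console.error(`ALERT: checkouts dropped to ${observed}/min (baseline ${BASELINE_CHECKOUTS_PER_MINUTE})`)
  }
}, 60 * 1000)
```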
By monitoring business transactions, we can start to measure the impact of our releases. It can be an incredible game changer in the way we develop software. Instead of focusing on output and features such as “customer can sign up using social login”, we can start to measure outcomes like “we get more customers signing up for our service.” Now we can optimize development efforts on business goals using data. It is a fundamentally different approach compared to prioritizing work based on the highest paid person’s opinion.
Getting proper monitoring in place is a bit like a well-crafted test suite or application security: it is not easy to bolt on top of the application later. Things like elementary server monitoring and health checks are easier to achieve than going all the way to an MTTR-optimized development process.
Should developers focus on monitoring and stop writing automated tests? Of course not, that would be insane. There are numerous reasons for having a test suite in place. Tests are our safety net. They allow us to develop features faster, enforce modular design, and help in documenting system behavior.
Monitoring is our safety net in the most critical environment: production. Therefore, all developers should ensure that their applications are robust from an operations perspective as well. All we need is to add a bit of “operations-driven development” mentality to our daily development routines. But then how do we take care of all this in practice? Well, that is another story and another blog post.
This post was originally published in Wunderdog’s blog
Depending on the context, there are cases where dynamically typed languages are a better match and cases where statically typed languages are better suited. When you want to get something simple done quickly, a strong, static type system might feel like overkill, and something like NodeJS + Express might be the ideal fit.
When the application domain is large and complex, involving a lot of data manipulation, then statically typed languages might help you in taming the domain. But as always in matters related to programming and technical decisions, your mileage may vary.
Back to Javascript, however. In the autumn of 2014, Facebook released a static type checker for Javascript called Flowtype. Flow is an additional type-system syntax on top of vanilla JS, and it is transpiled using a tool like Babel, in the same way as ES6. I thought I'd give Flow a try with one of my hobby projects, finnish-business-ids. The repo is very simple, but it contains all the bits and pieces needed to get running with Flow.
Flow's syntax builds on top of Javascript and features static typing for simple variables, arrays, classes, objects, functions and so forth. You can check the details in the documentation. I will focus here on demonstrating the impact on developer experience when using Flow with an editor like IDEA.
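As a quick taste of the syntax, here is a hedged sketch of an annotated function. The validation logic is simplified for illustration and is not the actual implementation from finnish-business-ids.

```js
/* @flow */

// Parameter and return types are declared inline; Flow checks the callers.
function isValidBusinessIdFormat(businessId: string): boolean {
  return /^\d{7}-\d$/.test(businessId)
}

isValidBusinessIdFormat('1572860-0')  // type checks
// isValidBusinessIdFormat(15728600)  // Flow error: number is incompatible with string
```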
IDEA can be configured to use Flow as the "Javascript language version". After this, IDEA will offer you autocompletion for functions with type information, as illustrated in the screenshot below. You can also declare your own types, which is extremely handy when using the options pattern.
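A minimal sketch of such a declared type could look like the following; the type and field names are made up purely for illustration.

```js
/* @flow */

// A custom type describing an options object (hypothetical names).
type ReferenceOptions = {
  length: number,
  prefix: string
}

function generateReference(options: ReferenceOptions): string {
  return options.prefix + '1'.repeat(options.length)
}

generateReference({ length: 6, prefix: 'RF' })      // type checks
// generateReference({ length: '6', prefix: 'RF' }) // Flow flags the string where a number is expected
```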
You can configure ESLint to enforce the level of additional type annotations you want for your project. In my example project, I have configured ESLint to enforce type annotations on function parameters and return types. Now that my editor is aware of this, it highlights the erroneous lines.
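As a rough sketch, such an ESLint setup with eslint-plugin-flowtype might look like this; the rule names follow that plugin, but the exact configuration in my example project may differ.

```js
// .eslintrc.js - a sketch, not the exact config from the example project.
module.exports = {
  parser: 'babel-eslint',
  plugins: ['flowtype'],
  rules: {
    'flowtype/require-parameter-type': 'error', // every function parameter must be annotated
    'flowtype/require-return-type': 'error'     // every function must declare its return type
  }
}
```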
To play it really safe, I run ESLint with Flow in a pre-commit hook, configured in package.json. This gives me fast feedback from my editor and a safety net before committing changes to Git.
So what kind of magic does Flow do under the hood? In the transpiled code shown in the screenshot below, we can see Flow adding a few lines for checking the parameters and the return value. This might bring a very small performance impact, but nothing to worry about in this case.
I'm not 100% convinced that I would pull Flow into my next NodeJS project. I like the simple approach and the ability to decide the level of static typing I want using ESLint. However, if the domain I am working in is complex, then perhaps a programming language with a strong type system is the weapon of choice. If I want something simple and straightforward with Javascript, then perhaps I can live without static typing.
My blogging platform is Hexo, a static site generator, which doesn't require a database or any tools beyond plain files - all the content is stored as YAML files on disk. I can use my favorite text editor, Atom, and do all the writing offline. This allows me to keep the sources for the site and posts in Git, hosted at Bitbucket.org. So now I have version control for my blog as well.
The Hexo-generated static site is rsynced over SSH to a Digital Ocean server, which runs Nginx to serve this blog. The rsync deployment is taken care of by Hexo; all I need to do is configure SSH keys and run `npm dist`, which executes `hexo generate && hexo deploy`. To make things even simpler, I could switch from the Digital Ocean server to AWS S3 and serve the site using S3's website feature. That would be very straightforward, because Hexo has an S3 website plugin.
Static site generators work on the principle of generating all the site content in a single batch. The end result is the set of HTML, JS, CSS and images needed to render the site. The site is generated once and then transferred to the web server, which only needs to serve static files. This means no database or any other runtime dependency that could easily become unavailable. Also, when using a static site generator, I am not at the mercy of a shaky internet connection, since I am not editing the blog via a browser on some remote site.
There is no need to serve blog entries from the database, because the content doesn’t change based on the blog reader’s actions. It is the same site and same articles on every single request. Anything dynamic - such as database and server side programming - adds only unnecessary runtime complexity in the context of blogging. This kind of complexity can lead to unexpected outages. But, when you only serve static content through a reliable web server, chances are quite high that things will not fall apart unexpectedly.
The performance of a site served from plain static content is of course very good, because the operating system's file cache can serve the files straight from memory.
If there is a need to add commenting to the blog, a service such as Disqus can easily be integrated on the client side. Since the dynamic content is not integrated on the server, the setup remains very robust. There are a few security risks related to this, but I can live with those since this is just a personal blog. Web analytics can be implemented either with client-side tools like Google Analytics or by simply visualizing access logs on the server side with something as simple as AWStats.
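As an illustration, a client-side Disqus integration boils down to a small loader script along these lines; EXAMPLE_SHORTNAME is a placeholder, and the snippet follows the general shape of Disqus's universal embed code rather than being copied from it.

```js
// Client-side sketch: the page stays static, the reader's browser pulls in the comments.
(function () {
  var script = document.createElement('script')
  script.src = 'https://EXAMPLE_SHORTNAME.disqus.com/embed.js' // placeholder shortname
  script.setAttribute('data-timestamp', +new Date())
  ;(document.head || document.body).appendChild(script)
})()
```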
For me, static site generators offer flexibility, power to choose the tools I want to use and the robustness of a bedrock. A happy blogging experience.
The idea of building a system from small independent components is not that new. For example, the tooling in Unix systems was built on this principle around 40 years ago, and the concept of Unix pipes was first presented in 1964 by Doug McIlroy. That's about half a century ago. Today's micro libraries have much in common with our favourite pocket knives such as find, cat and grep.
In a similar fashion, a micro library does one thing and one thing only, usually very well. For example, cookie.js is a 0.9 kB utility that simplifies cookie handling, and that's all there is to it. In contrast, a framework or a larger library does a handful of things, usually with at least a few tradeoffs. It's pretty much the same as with the tool popularized by MacGyver - the Swiss Army knife. It often gets the job done, but you are left with the feeling that something is not right.
You would like to have a specialized tool for the job and preferably avoid the tradeoffs. This is one area where microservices and micro libraries excel - they are simple and straight to the point. Don't get me wrong, there is room for frameworks and larger opinionated libraries. For example, in UI development frameworks help you organize the codebase and make your life easier. Other examples can be found in web app frameworks.
Micro libraries carry a small footprint in terms of file size, usually not more than 5 kB. The second important element in a micro library is "zero dependencies" thinking. These constraints make Javascript micro libraries ideal candidates for your application front end. You want to offer your customers the slickest UX on your service - not make them wait for a 1000 kB chunk of libraries of which you are only using 5%. Regardless of your caching strategy or how smart you are in packaging your static assets, the first visit will always be expensive in terms of page load time. Remember that you usually get only one chance to make that vital first impression.
The idea of packaging functionality into libraries is nothing new in web development either. We’ve seen useful specialized libraries before, but also witnessed them get bloated when it comes to functionality and size. Yes jQuery, I am looking at you and your 72 kB minified, gzipped bundle.
An example of a modern micro library is lscache. It gives you the capability to cache data in the browser with support for expiration times and cache recycling - all in one convenient 1.1 kB file. No strings attached and it works like a charm.
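A hedged usage sketch shows how little ceremony is involved; the set/get signatures are as I recall them from the project's README, with the third argument to set being the expiry in minutes.

```js
// Usage sketch of lscache: cache an API response in localStorage for 10 minutes
// and fall back to the network when it has expired. fetchRatesFromApi is a
// hypothetical function supplied by the caller.
import lscache from 'lscache'

function getExchangeRates(fetchRatesFromApi) {
  const cached = lscache.get('exchange-rates')
  if (cached) {
    return Promise.resolve(cached)
  }
  return fetchRatesFromApi().then(rates => {
    lscache.set('exchange-rates', rates, 10) // expiry in minutes
    return rates
  })
}
```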
Introducing boundaries to a collection of utility libraries might feel like a bad idea at first. You might ask yourself why you should include ten different small libraries in your app if all of them could be delivered in one handy bundle. Why deliberately hassle with so many dependencies? I can understand this mindset, but there is another side to this coin as well.
First, the term library anchors our thinking in the wrong way, because it refers to a collection of things. A software library doesn't have to contain a multitude of different functionalities. Second, we developers are trained to see structures and abstractions even in places where they don't necessarily exist. Third, perhaps in the past larger libraries were the only reasonable way to reuse code across codebases due to the need for manual plumbing.
But today there is no justified reason to package utility code into larger libraries. The code can be seamlessly distributed in small libraries, included in your app, and later packaged into one bundle for deployment. Thinking small starts to make a whole lot of sense.
Having clear boundaries on what a single library does is beneficial in many ways. First, it is easier to develop the functionality, because you have fewer things to worry about in one codebase. This means focus, which is the key ingredient for better productivity. And we know that productivity usually leads to happier days.
Second, if the micro library changes, it changes for a single reason. This is a recipe for a clean codebase. When the domain is limited, there are better chances of managing the pitfalls of dynamically typed Javascript.
Third, if the developer of the micro library you depend on decides to abandon the project, chances are quite high that you can easily take over the library and continue developing it, because the domain is very limited. This is a way to minimize risk by not laying all your eggs in a single basket.
There are a couple of things I have solved numerous times during my career as a developer. I have validated Finnish social security numbers, bank account numbers and various business ids over and over again using Javascript. I know that nowadays there are a few libraries available for these tasks, but they weren’t exactly what I was looking for.
First of all, I wanted a library that doesn't drag the whole internet with it. Actually, I wanted zero additional calories - not a single dependency. This is especially relevant after the little npm incident a couple of weeks ago. Second, I wanted the library to help me generate test data for my end-to-end tests. This is really helpful in cases where the data, for example an SSN, is defined as unique in the underlying datastore and you are writing end-to-end browser tests. As a programmer, I immediately saw the opportunity to "make my own and save some money".
Without a moment of hesitation, I decided to wrap these functionalities into three separate repositories and make them available for others: finnish-ssn, finnish-business-ids and finnish-bank-utils. The coding was pretty straightforward, but this wasn’t an exercise in learning any new skills. I had far more important goals in mind this time.
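As a taste of what using them looks like, here is a sketch of the two use cases - validation and test data generation. The import style and method names are from memory and may not match the published APIs exactly.

```js
// Sketch only: check the libraries' READMEs for the exact API.
const FinnishSSN = require('finnish-ssn')

// 1. Validating user input
FinnishSSN.validate('010101-0101')        // returns true or false

// 2. Generating unique-looking test data for end-to-end tests
const testSsn = FinnishSSN.createWithAge(30)
console.log(testSsn)
```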
If there is one single thing I got from my trivial experiment, it was the feeling that I have made someone's life easier with my code. Perhaps not immediately, but most likely in the future. These three micro libraries I open sourced were a small contribution back to the community that has given me so much over the past years. In fact, pretty much everything I have ever coded is built on top of open source one way or another. I owe a lot.
Even though I didn’t save the money myself this time, I know someone else for sure will in the future. I didn’t make a dent in the universe, but it doesn’t matter because I know that happiness comes in small doses.
From the customer's viewpoint, quality is almost always external, which means functional. High-quality software does what it is supposed to do, and perhaps even a bit more. Customers value software's capability to provide the best possible customer experience as efficiently as possible. These external characteristics of quality can often be measured objectively, which makes it possible to set formal indicators for defining quality.
Another aspect of external quality is user experience: for example, how the user can achieve his or her goals with minimal effort. For service designers, this is one of the most interesting viewpoints on quality, and it is also highly valued by the customer. We can measure user experience objectively to a certain degree with analytics, A/B tests, user tests and performance tests.
We coders often view quality from an internal viewpoint. High-quality code is simple, unambiguous, readable, concise, testable and modular. Abstractions and responsibilities between components are carefully thought out and justified. Code maps out the underlying business domain, actors and functions precisely. It also communicates with other services over well-defined interfaces. There is no bubblegum, glue or anything else that doesn't belong in the codebase.
Beautiful code is like poetry - with the exception that code communicates its message without any room for interpretation. The message is unambiguous. When the code is backed by an appropriate architecture, we have something remarkable on our hands, which will touch even the most rugged developer emotionally. We are at the crossroads of craftsmanship, art and engineering. This is the junction where innovations that change the behavior of individuals and organizations are created.
The tricky part of internal quality is that it is subjective, which makes it difficult to measure. The best we have found is a set of rules of thumb and good practices, which all still need to be interpreted on a case-by-case basis. It is easy to state that "you will recognize good code when you see it", and it's difficult to argue with that statement. I'm not saying that we should rely on formal methods for measuring code's internal quality - they don't meet the need. The question then is: how can we justify the importance of internal quality to our customers, if we can't point out the existence or absence of quality with clear indicators?
Code quality has no value on its own; it has to translate into our capability to produce better software. The concrete value of high-quality code can easily be demonstrated with a few examples. First, high-quality code is efficient to maintain, which enables faster development and makes it easier to adapt to market needs. Second, high-quality code is productive from an operational viewpoint as well: we can fix possible bugs rapidly and adapt business logic swiftly with almost no risk. Third, it is a joy to work in a high-quality codebase. I am much happier working with elegant code when spaghetti code is not in my way. Happy employees are the backbone of every successful organization.
We have established that there is external and internal quality. Software which has an elegant codebase but doesn't meet functional expectations can't be perceived as a high-quality product. The same applies vice versa: even if the software surpasses the highest user expectations, we can't consider it high quality if the internal structures of the code are chaotic.
Balancing external and internal quality is a decision which should always be made on a case-by-case basis. Far too often we end up overvaluing either external or internal quality, and we can't find the optimal balance for the whole. It might be justified to take on debt in internal quality if that improves the user experience dramatically and the need for change is urgent.
We should still keep in mind that the debt taken on in internal quality must be paid back some day, or the burden of the debt will cripple our software's capability to adapt to ever-changing requirements. And if we can say something for sure, it's the fact that the world changes every single day.
This post was originally published in Finnish on LinkedIn Pulse. Photo credit to Zephyris at Wikipedia.
Sometimes the proper way to solve a problem is to use pen and paper or to apply a process. However, when you need feedback on your code changes against a real system and you are not co-located with your stakeholder, tools can provide real leverage. We need a way to share the local development environment as easily as Amazon's one-click ordering. Luckily, there is a good range of options available nowadays to help keep the feedback loop as short as possible.
Aside from being a simple and powerful tool for configuring your local development environment to match production, Vagrant can also share your local box with the world with a single command: `vagrant share`.
I have set up a simple Vagrant box with nginx, Node and Mongo, which is used here for demonstration purposes. In this specific case we want to forward the HTTPS port, which requires the extra parameter: `vagrant share --https PORT`.
After cloning the example VM box, booting it up and sharing it, the resulting vagrantshare.com URL can be tested from any machine:

`$ curl --insecure https://selfish-panda-4911.vagrantshare.com`
That is quite simple, isn’t it? In my next post I plan to drill down a bit deeper on methods for keeping the feedback loop short.