Weekly developer news – October 13th 2017

So, welcome to the 3rd edition of developer news!

Again, as with last week, I could have included many more items, but have tried to limit it to the top 5. As I said before, if interest does continue, I am considering some other formats for daily news and commentary in addition to this weekly post.

So, here we go again:

1 : Microsoft stops development on Windows 10 Mobile

In a series of tweets, Microsoft’s Joe Belfiore confirmed that Windows 10 Mobile is now effectively in maintenance mode, with only security and bug fixes being performed. No feature development or hardware development is going to take place.

For more information, including details of the tweets, check out one of many write-ups here.

2: CSV Injection Demonstrated

I wanted to highlight a really great article by George Mauer, highlighting some potential security issues with CSV field import and export.

I don’t know about you, but a pretty decent number of the systems that I have worked on, and continue to work on, have some form of ‘spreadsheet export/import’ feature, and as developers, we like to keep things simple and go with a CSV import.

After all, every spreadsheet program, offline and online, accepts CSV, and CSV is safe, isn’t it?

Well, this article demonstrates that spreadsheets aren’t always immune to injection attacks, and that we should think about security when dealing with CSV data.
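To make the risk concrete: a cell beginning with `=`, `+`, `-` or `@` can be interpreted as a formula when the file is opened in a spreadsheet. Here is a rough sketch, not taken from the article, of a defensive export that neutralises such cells by prefixing them with a single quote:

```python
import csv
import io

# Characters that spreadsheet programs may treat as the start of a formula.
FORMULA_PREFIXES = ("=", "+", "-", "@")

def sanitise_cell(value):
    """Prefix cells that a spreadsheet could interpret as formulas."""
    text = str(value)
    if text.startswith(FORMULA_PREFIXES):
        return "'" + text
    return text

def export_rows(rows):
    """Write rows out as a CSV string, sanitising every cell."""
    buffer = io.StringIO()
    writer = csv.writer(buffer)
    for row in rows:
        writer.writerow([sanitise_cell(cell) for cell in row])
    return buffer.getvalue()

# A field that would otherwise execute as a formula when opened:
print(export_rows([["name", "total"], ["Alice", "=2+5"]]))
```

This is only a sketch; a real export needs more care (for example, legitimate negative numbers also begin with `-`), which is exactly why the article is worth reading.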

Check out the article here.

3 : Oculus Go

At their Oculus Connect 4 conference, Oculus have announced their upcoming Oculus Go device. (yes I’m aware I used the word Oculus 3 times in that sentence!)

It’s a device they describe as defining a new category of VR devices, that of 100% standalone devices.

As it sounds, this device does not need to be tethered to a PC, or paired with a mobile phone to operate. It is 100% standalone.

From a developer perspective, this could open up a number of interesting applications, and could potentially drive VR adoption, making it a more worthwhile platform to invest in.

For me, VR is something I’m keeping a casual eye on. AR is something I think has more potential, but am still in the monitoring stages before investing in any particular platform.

For more details, check out their announcement here.

4 : Bancor / Ethereum flaw in detail

Hackernoon has a great write-up of a flaw gaining a lot of press coverage in Bancor, a high-profile smart contract running on the Ethereum platform.

As blockchains and smart contracts gather increasing interest (or hype), it’s interesting as a developer to see a demonstration of how careful we must be when deploying code as a distributed smart contract.

It has serious implications.

This article demonstrates that in merely 150 lines of Python, it’s possible to exploit a flaw in the smart contract to monitor trades on the platform, and guarantee that you can sell tokens at a higher price than you are purchasing them for.

There’s a lot of good technical detail in the article, and even if you aren’t actively planning on developing blockchain applications, it’s well worth a read, and if you are, it’s a cautionary tale on being extremely careful with how you develop and deploy smart contracts.

Check out the article here.

5 : Friday the 13th coding horror

The final item for this week is a developer generously sharing some code that they created early on in their development life. This is a codebase the developer self-describes as “An incomprehensible hellscape of spaghetti code.”

This repo is getting a lot of discussion on Reddit and Hacker News, and it’s really great to see discussion of what doesn’t work in a codebase, in addition to how we should be doing things.

I believe that especially with the way technology advances so quickly, there are many many ‘right’ solutions to a problem, and developers are often far too quick to declare their way / favourite stack as the only way to do things.

I think that actually, we can learn an awful lot by looking at what doesn’t work, and understanding why, so it’s great to see discussions like this going on.

Are you brave enough to share your coding mistakes?

So, that’s it for this week. If you have any articles, announcements, tutorials, or anything else you think should be included next week, then just drop me an email.

Weekly developer news – October 6th 2017

So, welcome to the 2nd edition of developer news!

It’s only the second edition, and I have already broken a rule I established last time. I’ve included 6 items this week, going over my previous limit of 5.

To be honest, I could have included many more items. I don’t know if it was just because I had this post in mind, or whether it was a busy week, but it seemed I encountered so many interesting news items, articles, and tutorials this week.

I may mix up some formats if that trend continues, posting a weekly summary here, and a more realtime feed elsewhere, but more on that in another post.

So, without further ado, here we go:

1 : Apple iMac pro announced

Yes, I know, not everyone uses a Mac for development, but to be honest, most developers I encounter seem to.

And even if you don’t, Apple’s announcement demonstrates a seriously capable machine. Whether you work on intensive applications like spinning up ridiculous numbers of Docker containers, regularly perform video editing, or want to get into VR development, this machine seems more than capable.

It sports up to 18 cores of processing power, an all-new Radeon Vega graphics card, and up to 128GB of RAM, so it should handle most workloads without blinking.

For more details, see their announcement page here.

2: Alibaba Java coding guidelines

Alibaba has open sourced their Java coding guidelines.

I always find it interesting when an organisation does this. It’s an opportunity to both review my own opinion on coding best practices as well as gain an insight into how other organisations approach coding guidelines and code review.

I must say that it’s well structured in that they differentiate between mandatory and recommended practices, but their list is absolutely huge. I find it hard to see how any developer could remember to adhere to all of those guidelines, or review other people’s work against them.

Are coding guidelines something you find useful?

Check out the full list for yourself on their GitHub repo.

3 : Strangeloop 2017 videos

Videos from the Strangeloop 2017 conference have started to be uploaded to their YouTube channel.

If you aren’t familiar with the conference, it’s a conference covering various programming topics, including languages, databases, distributed systems, and security.

Whilst I have never attended in person, I always find the videos valuable. Some of the talk titles uploaded this year include “Zuul’s Journey to Non-Blocking”, “Keeping Time in Real Systems”, and “Reduce: Architecting and scaling a new web app at the NY Times”.

4 : Redis 4 planning to add streams

Streams, a concept popularised by Kafka, are now on their way to being translated into a Redis 4.2 module. I know many people have looked to Redis as a backend for stream-oriented, or event-driven systems.

The problem is, without streams, developers have to do extra work to bridge the gap between the list-like data structures and the pub/sub capabilities, often resorting to a mix of technologies to achieve a reliable event driven system and emulate an append only event log.
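To make the gap concrete, here is a toy, in-memory sketch in plain Python (not real Redis code) of the semantics a stream provides: an append-only log plus a per-consumer read offset, so a consumer that connects late can still replay history, which plain pub/sub alone cannot offer.

```python
class AppendOnlyLog:
    """Toy model of stream semantics: an append-only list of events
    plus a per-consumer read offset, so late consumers can replay."""

    def __init__(self):
        self.events = []
        self.offsets = {}  # consumer name -> next index to read

    def append(self, event):
        self.events.append(event)

    def read(self, consumer):
        """Return every event this consumer has not yet seen."""
        start = self.offsets.get(consumer, 0)
        unseen = self.events[start:]
        self.offsets[consumer] = len(self.events)
        return unseen

log = AppendOnlyLog()
log.append({"type": "order_placed", "id": 1})
log.append({"type": "order_paid", "id": 1})

# A consumer connecting late still sees the full history, unlike
# pub/sub, which only delivers to whoever is listening right now.
print(log.read("billing"))
log.append({"type": "order_shipped", "id": 1})
print(log.read("billing"))
```

Emulating exactly this on top of Redis lists and pub/sub is the extra work the stream module aims to remove.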

For more details on the planned work, see this post.

5 : Keybase launches encrypted git

Keybase has announced support for truly private repositories. These are git repositories that offer end to end encryption, so that even Keybase themselves cannot inspect the contents of your repository.

I can imagine that, for many people who don’t want to run their own git infrastructure, yet don’t feel comfortable uploading company source code, API keys, or other company secrets, this could be worth checking out.

For more details on how this works, take a look at their announcement.

6 : Preview of upcoming PHP 7.2 changes

Kinsta has a really well-written and comprehensive article outlining the changes, and their impact, to be released at the end of November in PHP 7.2.

If you are an active PHP developer, then this article gives a really good hands-on overview of what the changes actually mean.

So, that’s it for this week. If you have any articles, announcements, tutorials, or anything else you think should be included next week, then just drop me an email.

Weekly developer news – September 29th 2017

So, this is a different kind of post, and something I’ve been thinking of trying for a while.

At the moment, this is something I want to trial, and if people find this useful will continue to do this.

As part of the consulting and teaching work I do with developers and development teams, I feel it’s vital that I stay on top of advances in our industry.

Most technology developments are things that I just need to be aware of. Some are ones that I need to dig into in more detail, and others are things that I go deep on and adopt in my own work, or work I do with clients.

So in this series, I will summarise my own research and comment on what I feel are the top 5 most interesting developments, updates, and articles from the past week in software development.

So, without further ado, here we go:

1 : ReactJS Updates

After huge amounts of criticism and concern, ReactJS has switched license from BSD + Patents to the MIT license in version 15.6.2, and has also released a newer v16.0 build with some new features.

In v16.0, they describe the changes as “some long-standing feature requests, including fragments, error boundaries, portals, support for custom DOM attributes, improved server-side rendering, and reduced file size.”

To be honest, I think most developers are going to be more interested in the licence change than the new feature set, though it’s good to see development continue.

If the previous license had you concerned and planning a switch to alternative frameworks, how do you feel now about the future of React? Do you now feel happy to continue using it?

For more details on this, check out the React blog.

2: Firefox Quantum

Mozilla has announced a beta of Firefox Quantum. This is a new Firefox build that includes what they describe as a completely reinvented, modernized engine, offering significantly faster performance.

I know many developers who used to be Firefox fans switched long ago to Chrome (or, for some, Safari), mainly due to performance, but also due to Firefox feeling dated.

What are your favourite browsers for development / general browsing? It will be interesting to see if the speed improvements are worth it, and lead to more of us going back to Firefox.

More details on their announcement are available on their site.

3 : New Alexa hardware and SDKs

Amazon has announced new Alexa hardware, including a compact Echo that includes a screen, plus Alexa Gadgets, a way to interact with the all-new Alexa Buttons, or to build your own hardware that can interact with Alexa.

You can see more details on their development blog.

A link on the blog allows you to register to be notified when these new SDKs are available to use.

For me, Alexa and audio interaction in general are something that I am keeping an eye on, and I may also be releasing something in this area shortly.

As a developer, what’s your opinion on Alexa / voice interaction? Is this something you are building for now, or looking to do in future?

4 : TypeScript at Lyft

If you aren’t already aware, TypeScript is a superset of JavaScript that adds optional static typing to the language.

This comprehensive article from the engineering team at Lyft gives a good breakdown of their motivations for choosing TypeScript over plain JavaScript and FlowType, and digs into the details of how they went about this and the benefits they have seen.

I know for me, even though I tend to stick with vanilla JS over TypeScript, given I tend to use Visual Studio Code for editing, I see the benefits of TypeScript day to day, purely through the autocompletions and hints the IDE provides based on TypeScript definitions.

5 : Unit testing Postgres

Simon McClive has written a really interesting article detailing an approach to unit testing changes at the database level.

If you are working with databases of any reasonable complexity, the schema will change over time, and his approach seems like a great way to introduce testing at that level.

For me, this is something I would adopt in addition to many other types of test. It seems like it would be very valuable as a way to catch errors before any end-to-end tests run; at best, even if those do spot issues, they are likely to surface as high-level application errors that then need to be debugged.

So that’s it for this first post of this type. I’m committing to doing this weekly for the next 4 weeks, and if people find it useful I am happy to continue further.

If you have any feedback at all, then please do let me know.

TDD Mindset == Confidence

Recently, I was talking with another developer about confidence when writing code, and what I felt led me to be able to get changes out to production quickly.

He wanted to know what it was that gave me confidence that my changes

a) weren’t going to break anything, and
b) had the desired results

I mentioned my previous blog post where I talked about TDD / unit testing mindset, and gave some examples of tests that were written with the typical ‘I have to test this method/action’ kind of mindset, and talked about why that might not be such a good idea.

Now, I’m not saying that this is easy, but writing tests, and writing them well, does require a different mindset to the one we adopt when writing the ‘real’ code.

Adopting a mindset is a hard thing to do. It is more than just instruction. It takes time, deliberate practice, and guidance, but once it clicks, it’s really powerful.

Here’s an example from development work I do with one of my clients.

The team that I am working with works in a pretty Kanban-like way, often pretending it’s Scrum, but basically each developer works on their own stream of features, sometimes getting interrupted to work on defect fixes for issues found in production under unexpected conditions.

For both the day to day feature work, and the defect fixes, where there is often a lot of perceived urgency in getting a fix out, adopting the right mindset, and having the right approach to writing tests gives a massive boost to my performance.

It provides confidence. Confidence to make changes, knowing I haven’t broken any behaviour that I had previously explicitly called out and deliberately tested in isolation.

With the right set of tests in place, I can very quickly create a new test, see it fail, make the change, and repeat until the feature is done, or the defect is fixed.
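As a sketch of that loop, with an invented discount rule rather than any client code, each test names the behaviour it protects, so a failure tells me exactly which rule I have broken:

```python
# Hypothetical domain rule, invented for illustration: orders of
# 100.00 or more receive a 10% loyalty discount; smaller orders don't.
def apply_discount(total):
    if total >= 100.00:
        return round(total * 0.9, 2)
    return total

# Behaviour-focused tests: each one names a rule we have explicitly
# called out, rather than a method we happen to be exercising.
def test_orders_of_100_or_more_get_ten_percent_off():
    assert apply_discount(100.00) == 90.00

def test_smaller_orders_are_not_discounted():
    assert apply_discount(99.99) == 99.99

test_orders_of_100_or_more_get_ten_percent_off()
test_smaller_orders_are_not_discounted()
```

When a change breaks one of these, the test name itself tells me which explicitly stated behaviour I have to revisit.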

By focusing on behaviour, by adopting the right mindset, I genuinely have no concerns about deploying changes to production in a matter of hours, if not minutes.

The tests are doing what they should do. They are helping me. Making me better. Making me faster. Making me more confident in my ability to deliver.

How do you feel about your tests, TDD, or unit testing that you do? Do you have a consistent approach and understand why that approach works?

Abstraction vs Duplication. Why DRY is bad advice

In this post, I want to challenge the DRY mantra.


Don’t Repeat Yourself!

This is something that we are fed on a seemingly continual basis.

No matter how you came into software engineering, whether through a computer science degree, or through less formal training, you have probably heard many people exclaim that we should not repeat ourselves.

The DRY principle is stated as

“Every piece of knowledge must have a single, unambiguous, authoritative representation within a system”.

If for some reason this has bypassed you, or if you haven’t read the book, then I would encourage you to check out The Pragmatic Programmer, by Andrew Hunt and David Thomas.


There are some problems though with the way that DRY is often interpreted that I want to highlight.

It’s something that I see come up frequently to the detriment of good software design. It’s something that can cause more issues than it solves, despite coming from a place of good intention.

If we go to the Wikipedia definition of DRY, we can see it talks about DRY vs WET solutions:

Violations of DRY are typically referred to as WET solutions, which is commonly taken to stand for either “write everything twice”, “we enjoy typing” or “waste everyone’s time”.

This sounds fine in theory, but the problem is that when we seek to avoid “writing everything twice”, sometimes this can become never write anything twice. And this is where I think the problem lies.

Instead of us seeking to avoid encoding the same knowledge in more than one place, it often manifests itself as never write the same or similar looking code twice.

And this causes problems when we have similar looking code, similar business processes in our code, similar sequences of method or function calls that are only similar, not the same.

In my view, attempting to coerce them into some common place, so that we don’t have similar looking code, or the same line of code in two different places is the wrong thing to do. Far too often I see developers prematurely create abstractions out of fear of having duplicated code.

The problem is that these abstractions are often very weak abstractions, or only appear to be a meaningful abstraction at surface level.

I want to look at a couple of examples of this in action, firstly in application code, and then in test code.

Here is an example of some application code that certainly suffers from over abstraction, of being too DRY.

package com.seriouscompany.business.java.fizzbuzz.packagenamingpackage.impl.strategies;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;

import com.seriouscompany.business.java.fizzbuzz.packagenamingpackage.impl.strategies.adapters.LoopContextStateRetrievalToSingleStepOutputGenerationAdapter;
import com.seriouscompany.business.java.fizzbuzz.packagenamingpackage.interfaces.loop.LoopContextStateRetrieval;
import com.seriouscompany.business.java.fizzbuzz.packagenamingpackage.interfaces.loop.LoopPayloadExecution;
import com.seriouscompany.business.java.fizzbuzz.packagenamingpackage.interfaces.strategies.OutputGenerationStrategy;

@Service
public class SingleStepPayload implements LoopPayloadExecution {

	private final OutputGenerationStrategy _outputGenerationStrategy;

	@Autowired
	public SingleStepPayload(final OutputGenerationStrategy _outputGenerationStrategy) {
		super();
		this._outputGenerationStrategy = _outputGenerationStrategy;
	}

	@Override
	public void runLoopPayload(final LoopContextStateRetrieval stateRetrieval) {
		final LoopContextStateRetrievalToSingleStepOutputGenerationAdapter adapter =
				new LoopContextStateRetrievalToSingleStepOutputGenerationAdapter(stateRetrieval);
		this._outputGenerationStrategy.performGenerationForCurrentStep(adapter);
	}

}

 

This is from a version of the classic FizzBuzz problem.

It’s intended to poke fun at the common ‘enterprise’ Java code that exists, but it’s not too far removed from reality. It has all of the hallmarks of developers being too eager to generalise, to introduce abstractions.
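For contrast, the entire behaviour that codebase implements is classic FizzBuzz, which fits in a handful of lines of, say, Python:

```python
# The behaviour buried under all the adapters, strategies and payloads
# above: classic FizzBuzz.
def fizzbuzz(n):
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)

print(" ".join(fizzbuzz(i) for i in range(1, 16)))
# → 1 2 Fizz 4 Buzz Fizz 7 8 Fizz Buzz 11 Fizz 13 14 FizzBuzz
```

The gap between these few lines and the class hierarchy above is a good mental benchmark for how much an abstraction is costing you.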

Here’s an example of the same kind of problem from a different project, but this time in test code.

public class IntegrationTestUtil {

    public static final int AVAILABLE_SEATS = 20;


    public static Pair<Event, String> initEvent(List<TicketCategoryModification> categories,
                                                OrganizationRepository organizationRepository,
                                                UserManager userManager,
                                                EventManager eventManager) {

        String organizationName = UUID.randomUUID().toString();
        String username = UUID.randomUUID().toString();
        String eventName = UUID.randomUUID().toString();

        userManager.createOrganization(organizationName, "org", "email@example.com");
        Organization organization = organizationRepository.findByName(organizationName).get(0);
        userManager.insertUser(organization.getId(), username, "test", "test", "test@example.com", Role.OPERATOR, User.Type.INTERNAL);
        userManager.insertUser(organization.getId(), username+"_owner", "test", "test", "test@example.com", Role.OWNER, User.Type.INTERNAL);

        LocalDateTime expiration = LocalDateTime.now().plusDays(5).plusHours(1);

        Map<String, String> desc = new HashMap<>();
        desc.put("en", "muh description");
        desc.put("it", "muh description");
        desc.put("de", "muh description");

        EventModification em = new EventModification(null, Event.EventType.INTERNAL, "url", "url", "url", "url", null,
                eventName, "event display name", organization.getId(),
                "muh location", desc,
                new DateTimeModification(LocalDate.now().plusDays(5), LocalTime.now()),
                new DateTimeModification(expiration.toLocalDate(), expiration.toLocalTime()),
                BigDecimal.TEN, "CHF", AVAILABLE_SEATS, BigDecimal.ONE, true, Collections.singletonList(PaymentProxy.OFFLINE), categories, false, new LocationDescriptor("","","",""), 7, null, null);
        eventManager.createEvent(em);
        return Pair.of(eventManager.getSingleEvent(eventName, username), username);
    }

}

 

This is some test code taken from an open source project that I won’t name, and it’s a little different to the application code example. This time, we are looking at some so-called ‘helper’ code that is used by many different tests.

There are fewer technical abstractions present, but there is still a high degree of coupling. We have common test setup methods that at first glance seem like they might provide a benefit to us. After all, it saves us from having to do all of that same setup in each test method.

The problem with this, though, is that far too much setup has made its way into this common method. Every line of that shared test setup code is relevant to one or more of the test methods, but not all of that setup is relevant to every test.

This creates brittle and fragile tests, where for any given test method, we don’t know what input is relevant to the scenario being tested.

And, if we need to have more or slightly different input data for a particular test, then if we add to or modify that shared test setup, we don’t know what effect that might have on the other tests using this setup method.

I have seen test code with a large number of shared test setup helper methods get to the point, over time, that these shared methods have been modified so much that some of the tests are no longer testing what they originally were, because of the lack of clarity over the relevant input data for each test.
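One way out, sketched below with invented names, is to replace the all-purpose shared setup with a small builder that supplies valid defaults, so each test states only the inputs relevant to its scenario:

```python
# Hypothetical event model, standing in for the real domain objects.
DEFAULT_EVENT = {
    "name": "any-event",
    "available_seats": 20,
    "price": 10.0,
    "payment_proxies": ["OFFLINE"],
}

def make_event(**overrides):
    """Build a valid event, letting each test override only what matters."""
    event = dict(DEFAULT_EVENT)
    event.update(overrides)
    return event

def is_sold_out(event, tickets_sold):
    return tickets_sold >= event["available_seats"]

# The test now reads as: 'only available_seats is relevant here'.
def test_event_with_no_seats_is_sold_out():
    event = make_event(available_seats=0)
    assert is_sold_out(event, tickets_sold=0)

test_event_with_no_seats_is_sold_out()
```

Adding a new default to the builder can’t silently change what an existing test means, because each test names the inputs it depends on.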


To summarise the examples, they both suffer from the problem of developers being too keen on not duplicating code.

This has been taken to the point of not wanting to duplicate the same or similar set of method calls in a codebase, and has resulted in abstractions existing not to encapsulate something from our problem domain, but simply to avoid code duplication.

These are vague abstractions that increase complexity.

This kind of coupling can result in:

  • Developers being fearful of changing the common code in case it breaks some unknown part of the system.
  • Excessive parameters being added to methods to control different branches or use cases of the shared code.
  • Logic relating to a particular use case or area of business logic becoming fragmented and distributed between classes.

Signs that we should be duplicating code

Thankfully though, there are some things to look for in our codebase that could indicate we are creating an artificial abstraction that might increase rather than reduce our complexity.

These traits in your codebase don’t necessarily mean you have this problem, but they should be indicators that you need to pause for a minute and work out whether you are heading in this direction.

Some of these are:

  • Classes with only one method
  • Classes with methods that are named pretty much the same as the class itself
  • Big if statements in methods with a parameter used to control which branch executes
  • Classes named after what they do rather than what they represent i.e. named after the code they call, and typically contain suffixes such as ‘Orchestrator’, ‘Manager’, ‘Utility’, ‘Helper’ etc
  • Abstractions that exist for ‘future’ benefit e.g. to allow a part of the system to be swapped out in future.
  • Large amounts of configuration required just to wire up dependencies
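The flag-parameter sign in particular tends to look like this (an invented example): one ‘shared’ method where a boolean steers a big if, when two small, honestly-duplicated functions would be clearer:

```python
# Premature abstraction: one 'shared' function, steered by a flag.
def format_report(data, is_summary):
    if is_summary:
        return f"{len(data)} items"
    else:
        return "\n".join(f"- {item}" for item in data)

# Duplicated-but-clear alternative: two functions, each with one job.
# The few similar lines are cheaper than the coupling the flag creates.
def format_summary(data):
    return f"{len(data)} items"

def format_detail(data):
    return "\n".join(f"- {item}" for item in data)

print(format_summary(["a", "b"]))  # prints "2 items"
print(format_detail(["a", "b"]))
```

Notice how every caller of `format_report` has to know about every other caller’s use case, via the flag; the two separate functions carry no such coupling.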

So, what should we do?

So to conclude, if you find yourself spotting traits like this in your codebase, then I would recommend you start off by duplicating code instead of creating abstractions.

Then, once you have a good understanding of how the different parts of your codebase use the duplicated code, you can decide whether or not there is a genuine common abstraction that should exist.

Sometimes there may well be a valid abstraction that should exist to encapsulate something about your problem domain and its behaviour.

Often though, we don’t actually need to introduce new abstractions. Remember that they are not cost free. They introduce coupling and complexity.

In my experience, it’s almost always cheaper and easier to fix a codebase starting from a small amount of duplicated code than to deal after the fact with the high complexity and coupling of the wrong abstractions, where you have to try and unpick the shared code.

Duplication is far far cheaper than the wrong abstraction.

Code review goals and merits, and why you might not always need to do it

I want to talk about code reviews.

Pretty much every job I have ever had as a software developer and every client I have worked with has had some kind of code review process.

A simple search for the term code review yields many results, so what do I think I can add to the conversation?

Well, most of these search results either relate to specific things we should be doing or looking for in a code review, or are for tooling to supposedly allow us to review code better, or to actually force us to do a review.

Most organisations though, aren’t talking about why we do code reviews. I am a firm believer in equipping myself with a set of tools as a developer, and then having the ability to make intelligent selection of the right set of tools to use in a given situation, given a set of goals I am trying to meet. Tools such as TDD, unit testing, integration testing, pair programming, and, yes, code reviews.

Most organisations and development departments are keen to put a code review step in place, probably as a mandatory step in Jira in between development and test.

After all, it’s pretty hard for anyone to argue that code review is a bad thing.

My problem though is that this can often leave us with a process that just feels good. It feels like we are doing the right thing by having a mandatory review step.

But, if we don’t agree on what purpose this step in our development process is serving, then how do we know it is serving this purpose, and therefore a valuable thing to be doing?

And, as a reviewer, how do I know what I should be checking?

Should I be running the code?

Just inspecting the code?

And what should I look for when I do inspect it?

If I do have any questions or comments, then does the original developer have to address all of them?

So, what is the point?

I would like to start by working out what some of the most common reasons people cite are for doing code reviews, and use this as a starting point.

Once we have this, I would then like to look at a number of different tools that we might use to achieve those cited benefits.

I feel that code review is almost always used dogmatically, without a shared purpose in mind. If we equip ourselves with a shared view on what we are setting out to achieve, then we may find that code review isn’t always the right tool.

And when it is the right tool, we can deploy it far more intelligently, more precisely and effectively with confidence we are looking for the right things.

OK, OK, I get it, so why do we do code reviews?

Looking at some of the popular search results for ‘code review’, we can see many posts providing us with a set of guidelines we should or could follow.

For example, in a post that lists ‘10 best practices for code review’, we see that the very first two points tell us how many lines of code we should be reviewing at once, and the rate at which we should be reading these lines of code.

The reasons for this, we are told, are because deviating from this will affect our defect detection rate.

Ah, ok, so code reviews are about detecting defects then?

There are other points, and indeed many other articles, that all focus on code review from the perspective of being an exercise in detecting defects in code before it goes live.

For many people, this is the sole reason for performing code review.

There are many other posts, that tell us there could be other reasons for doing code reviews.

The most common ones are:

  • To ensure that appropriate ‘coding standards’ are maintained
  • It’s a learning exercise for the reviewer, or a way of sharing knowledge
  • It helps to detect bad or wrong design
  • It encourages collaboration
  • It ensures non-functional requirements such as security or performance have been met
  • It ensures the appropriate tests exist alongside the code

So how do we do code review?

So, with 90% of the developers and teams I have worked with, it goes something like this:

  1. A developer finishes work on something, raises a merge request, and moves the Jira card from ‘Development’ to ‘Review’
  2. The original developer will either assign the merge request or review to someone explicitly, or ask that someone who feels like reviewing it pick up the review task
  3. The reviewer will independently look at the code in the merge request, and make a number of comments against the changes in the merge request
  4. The original developer and reviewer go back and forth a few times, making changes to address comments, re-reviewing, etc.
  5. Once the reviewer is happy, they will either accept the merge request (and responsibility for it), or let the original developer know that they are happy for the request to be merged
  6. The Jira card moves along to the next step, typically some kind of ‘Ready to test’ or ‘Test’ state

This is how it tends to happen most of the time.

Sometimes developers will sit together to walk through the review.
Sometimes there are more formal review meetings.

But in my experience, this is how most people implement a code review process.

Does this process work?

It can do. I have seen it work, but often there are a number of issues with this kind of workflow.

One of the biggest problems, as mentioned before, is that people go into this with the wrong mindset. They perform code reviews mainly as an unfocused, feel-good exercise. Yes, good reviewers will catch issues, give good feedback, etc. But this happens unreliably, because most people don’t have a shared view as to what they are trying to achieve with these reviews.

Another big issue, especially when code review is the point at which feedback on design is given, is that this feedback often comes too late to do anything productive with it.

It’s pretty demotivating to find out, after having written a lot of code, that someone else thinks we should change it all. No one wants to be in this position, and to be honest, most people wouldn’t want to have to tell someone this either. It’s far too late for code review to be an effective way to validate design. It either leaves you with two developers feeling uncomfortable, or with sub-optimal design getting through.

Also, due to the asynchronous nature of a code review, often by the time feedback is given, it’s an interruption to the thing the developer is trying to focus on.

Note, I’m not saying don’t code review. Please please do code review, but do it intelligently, with the pitfalls in mind.

So, what should we do?

Treat code review more like a tool used to achieve something, rather than a process.

This means first of all, working out, and agreeing on the goals you have, the things you want to be able to verify and try to protect against.

So, you probably want to agree on design standards, coding standards, what your approach to testing is, and what kind of non-functional requirements you have.

Once you have agreed these things and written them down somewhere, then for each change, rather than trying to cover all of those bases at once after the code has been written, think about how and when you can ensure those goals are met for that change.

Decide which tool or tools you can deploy most effectively to verify the things you care about for any given change.

The key thing is, some changes may require more or less review, at different points in time. For some changes you may want to discuss and decide on the design before doing any code. Sometimes part way through coding design choices come up. Some changes are so simple it isn’t an issue.

Sometimes, the way to ensure coding standards could be through the appropriate use of static code analysis tools.
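For example, coding standards can be captured in a linter configuration so they are checked automatically on every change, freeing human reviewers to focus on design. A minimal ESLint sketch (the specific rules chosen here are illustrative, not a recommendation):

```json
{
  "extends": "eslint:recommended",
  "env": { "node": true, "es6": true },
  "rules": {
    "eqeqeq": "error",
    "no-unused-vars": "warn",
    "complexity": ["warn", 10]
  }
}
```

Run as part of CI, a config like this makes the standards conversation happen once, up front, rather than on every merge request.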

Sometimes it could be more effective to just pair program the change with someone who knows the area of code being changed well.

And if traditional pair programming isn’t right, then how about cooperative pair programming?

For knowledge transfer, maybe code review isn’t the best tool to use. If learning is a goal, then don’t expect someone to learn how something works and at the same time be capable of giving critical feedback.

If you approach this from a place of goals, the needs you want to meet, rather than from process, and recognise that code review is a tool, then it frees you to make intelligent choices on a case-by-case basis, with full knowledge of what your goals and standards are.

So, it’s not a step in a Jira driven process. It’s a way to make sure the standards you set for yourself (and agree on) are met.

Hopefully this means that, when and where you do decide to have someone review code, you do it at the right time, both parties know exactly what to expect, and they can actively engage with the process and see it as a good thing that provides benefit. Something more collaborative than moving a Jira card along and hoping someone will rubber-stamp it.

References

Credit goes to the people and pages linked below. Check them out, but remember to work out what you are trying to achieve in your specific situation.

https://github.com/thoughtbot/guides/tree/master/code-review

https://blog.fogcreek.com/increase-defect-detection-with-our-code-review-checklist-example/

https://msdn.microsoft.com/en-us/library/ms182019(v=vs.100).aspx

http://jibbering.com/faq/notes/review/

https://www.atlassian.com/agile/code-reviews

https://cwiki.apache.org/confluence/display/QUICKSTEP/Code+Review+Guidelines

https://mozillascience.github.io/codeReview/review.html

https://everydayrails.com/2017/01/16/code-review-mindset.html

https://blog.mavenhive.in/pair-programming-vs-code-reviews-79f0f1bf926

https://developers.slashdot.org/story/16/01/21/1749247/code-reviews-vs-pair-programming

https://arxiv.org/abs/1706.02062

https://collaboration.csc.ncsu.edu/laurie/Papers/XPSardinia.PDF

http://engineering.pivotal.io/post/pair-programming-in-a-distributed-team/

https://smartbear.com/learn/code-review/best-practices-for-peer-code-review/

http://www.evoketechnologies.com/blog/code-review-checklist-perform-effective-code-reviews/

Your unit tests are probably slowing you down

If you are looking at this post, you are probably interested in test driven development, or at the very least unit testing.

TDD, unit testing, and other levels of testing are, in my mind, just a set of tools we can use to help us write better software, quicker.

I’m not going to come down hard on people and say thou shalt only write code if there’s already a failing test. There are many times writing tests first is the wrong thing to do.

That said, I think TDD is a tool that is often under used.

I think it’s under-used for one reason. And it’s the same reason I see so many unit tests that are actually harming your development, not helping it. These are tests that at surface level seem like good tests, adding value, but on closer inspection are hurting your development, storing up problems for you, and even in the short to medium term slowing your team down.

So, what is this reason?

Well, I believe it’s that for the most part, most developers, even some of the great developers out there, are actually approaching their tests with the wrong mindset, thinking about their tests in the wrong way.

And, by the way, I don’t blame them for this.

In the world of information overload that we live in, I see so many well-intentioned blog posts describing TDD, unit testing, or even just software development in a way that is overly simplified. The problem with this is that people then come away with the impression that this simplified view of the world covers everything they need to know. They think that’s all there is to doing TDD.

It’s not helped by the fact that some of the downsides to the tests that most people end up with aren’t actually seen until a little further down the road, when they try to build upon or refactor the functionality they have ended up with.

Then they are in a world of pain, usually with large amounts of tests failing, no one knowing whether the tests or the code is wrong, and spurious bugs because of this.

The developers that I have seen, though, who have learnt themselves, or been coached by others like myself, to adopt the right mindset when approaching TDD are able to work quicker in both the short term and the long term. Their tests speed up their work rather than slowing it down, and they avoid the problems of brittle tests failing for unknown reasons.

So, if mindset is that important, then what is the wrong mindset, and how should you think about tests instead?

So, I want to describe this by showing an example. This is taken from an open source project that I won’t name, because I don’t want to make it sound like I’m being critical of the person that created it. As I said before, there’s so much information out there that tells us that this is all we should do.

The code that this is testing is from a TODO list application, something that’s often used as an example application because there’s enough complexity there to be interesting, but not too much.

it('should PUT /tasks/:id 200', function (done) {
    request(app)
    .put('/tasks/' + this.id)
    .send({title: 'foo'})
    .expect(200, function (err) {
      assert(err == undefined);
      request(app)
      .get('/tasks')
      .expect(200)
      .end(function (err, res) {
        assert(err == undefined);
        assert(res.body[0].title === 'foo');
        done();
      });
    });
  });


So looking at this test, at first glance, it might seem OK. It’s testing that we can PUT a new task. It’s a pretty small test, so we can read it relatively easily.

The test name is ok. Again, it seems to indicate this test is about PUTting a new task. It also ends with ‘200’ which I think we might assume is the expected response code. So, I assume from the name that this is testing the successful test case.

Aside from the name giving us a few things we have to think about or assume, there are some other problems with this test, and tests like this.

This test is more brittle than it should be, and from looking at this and other surrounding tests, I can see that the approach taken to testing, which is the most common approach I see, is one in which the developer is clearly focused on the method, or action being invoked in the test, in this case the PUT method.

Now, you might not see a problem with this, but bear with me.

Instead of focusing on the method or action being tested, I think that we should instead focus on the behaviour of the system under test, which might seem like a different way of describing the same thing. But it’s not. It is different.

Through my own experience and that of others I have coached, I have seen much better results, much more maintainable code, giving the ability to work much quicker at a higher quality when adopting a more purposeful mindset. And no, I’m not talking about BDD syntax here.

So, breaking this test down, let’s look at what it is doing line by line.

    request(app)
    .put('/tasks/' + this.id)

So, here, we are creating a PUT request. That’s great, that’s our action being tested.

    .send({title: 'foo'})

We have some input data here – an object with a title field.

    .expect(200, function (err) {

Now, our first assertion: we are making sure that we get a 200 response back.

      assert(err == undefined);

Now, another assertion, this time, asserting that no error is returned, but also unfortunately losing those error details.

      request(app)
      .get('/tasks')

Next, we are making an additional request. This time, a GET request to retrieve the newly created task.

      .expect(200)

Here, again, we have another assertion, making sure we get a 200 response back.

        assert(err == undefined);

And another again, ensuring there is no error.

        assert(res.body[0].title === 'foo');

And finally, another assertion, making sure the task title is the one we sent in the PUT request.

So, from this set of assertions, it seems our assumptions about this test are correct. It is testing the success case.

The problem though, is that this test is actually testing many many things all at once. It is testing multiple behaviours in the same test method. We can see this by the number of assertions we have, but also the types of assertions.

Just because a behaviour of the system is triggered by one particular method, with a particular input, does not mean that it should be verified in the same test as a different behaviour.

We also have coupling. The way this test verifies the task has been created is via the GET method, which in turn has a couple of assertions, relating to status code etc.

If the GET method code became broken for any reason, then this test would fail, even if the code relating to task creation was working perfectly.

Taking a more behaviour-oriented, purposeful approach to thinking about the PUT method would result in more tests, each with a more clearly defined purpose. For example, rather than the test we have seen, why not have:

it('should return a successful response when attempting to PUT task title changes', function (done) {
    request(app)
    .put('/tasks/' + this.id)
    .send({title: 'foo'})
    .expect(200, done);
});

This test is only verifying the response, not any side effects.

We can then have a second test

it('should store the updated title against the given task when making a PUT request with a new task title', function (done) {
    var datastore = this.datastore;
    var taskID = this.id;
    request(app)
    .put('/tasks/' + this.id)
    .send({title: 'foo'})
    .end(function () {
      datastore.updateTask.should.have.been.calledWith(taskID, {title: 'foo'});
      done();
    });
});


This test should verify the data store interaction required to update a task’s title, probably using a fake or a mock version of the data store. It should only verify this. We don’t need to repeat assertions for the 200 response that we already have in the new test above.

In addition to these tests, we probably want other test cases too, like what response should be received if we give an invalid task ID, what happens if we attempt to modify a task with no new task data, etc.

So, in this example, we have taken 1 test, and by taking a more behaviour oriented approach have done a couple of things.

We have split it into two tests, to decouple verifying our different behaviours:

  1. The HTTP handling and response generation
  2. The interaction with the data store – the business logic, if you like, of updating the task

We have also decoupled the test relating to PUTting tasks from the GET request. We no longer need to make a GET request to verify our behaviour.

These changes might seem subtle, but shifting your approach and mindset in this way will lead to more obvious, clearly defined tests that are less brittle, and to a more maintainable system going forward.

I go into much more detail of this technique, and other principles and strategies you can apply to your own unit testing and test driven development in my new 6 week course, TDD Made Easy.

Code That’s Trending: What You Should Be Learning In 2017

Only a little over two decades ago, the biggest technological innovation was IBM’s one-gigabyte hard drive (retailing at a mere $3,000) and the introduction of Windows 95.

To say that tech has changed in the last 20 years is an understatement.

But what will technology look like 20 years from now? Will the war between iOS and Android be redundant? Will fully functioning robots finally be a thing?

It’s hard to say for sure, but the reality for those who work in technology – in this case, programming and coding – is that things will most certainly change. In fact, things have already started to shift in 2017.

For coders looking to take advantage of some of the new tech, languages, frameworks, and programming strategies out there, here are a few key places to start.


Preprocessors

CSS preprocessors – like LESS and SASS – have been around for a while now, but devs have finally been catching on to their real potential in the last few years, and their popularity will most likely continue to grow over the coming year.

In the past, full language stacks were all the rage, and coders would spend hours building everything from scratch. But then someone figured out that you could build on the work of others by extending CSS’s basic functionality, and preprocessors were born.

Preprocessors take a base language – in this case CSS – and extend it to serve as the foundation for larger projects. Some of the other advantages include:

  1. Modularization for your styles
  2. Reduced redundancy with variables and mixins
  3. Code reuse across multiple projects
  4. Nested, smart styles
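To illustrate a couple of those advantages, here is a small Sass (SCSS) sketch using a variable, a mixin, and nesting (all of the names and values are made up for the example):

```scss
// A variable and a mixin cut down on repetition...
$brand-color: #3b82f6;

@mixin rounded($radius: 4px) {
  border-radius: $radius;
}

// ...and nesting keeps related styles together.
.card {
  border: 1px solid $brand-color;
  @include rounded(8px);

  .card-title {
    color: $brand-color;
  }
}
```

The preprocessor compiles this down to plain CSS, so the browser never knows the difference.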

Of course, if you’ve never worked with preprocessors before, know that there is a bit of a learning curve. Because they have their own syntax, you’ll most likely need to choose a single preprocessor to learn first and then move on to the others from there. If you have a good foundation in JavaScript and CSS, you’ll have a solid head start.

JavaScript MV* frameworks

JavaScript frameworks are another underutilized area of development that may see more playing time in the coming year.

Consider these frameworks – called MV* frameworks (model-view-wildcard) – as a sort of “next step” up from working with vanilla JavaScript files (plain .js libraries, sans jQuery).

Popular frameworks include Kendo, Sencha, jQuery Mobile, AngularJS, Ember, Backbone, and Meteor JS, though there are more. Part of their job is to handle some of the more complicated processes of HTML5 Web Apps and to help developers work across different platforms, like desktop to mobile.

Since Web Apps using HTML5 and cross-platform development are both on the rise in the tech world, it makes sense for developers to point their ears in that direction as well.

Single-Page Web Apps

As long as you’re learning how to work with JavaScript frameworks, you may as well learn more about the intricacies of Single-Page Web Apps (SPAs) too.

While SPAs still lag behind traditional web pages in terms of popularity, their use has been growing over the past several years.

An SPA is essentially a single, responsive page that works just like an app. The front end of the page pulls content from a much larger database without the developer needing to add markup to every piece of data, so the page stays regularly updated with minimal effort.

They’re also much faster than traditional pages, as the HTML, CSS, and scripts only load once throughout the lifespan of the application. Only data is transmitted back and forth, so bandwidth is reduced and page speed is increased.

Considering how much time and energy developers spend creating, updating, and maintaining web pages, you can probably see why SPAs are catching on.

SVG + JavaScript on Canvas

Many tech prophets have been proclaiming the Death of Flash for a while now, so it’s no surprise that it’s on the outs. But Flash has been an integral part of web development for a long time now, and many devs love the look and function of it.

The answer is HTML5 Canvas and SVG (Scalable Vector Graphics), driven by JavaScript, as the building blocks for creating better animations.

Developers can essentially “draw” elements using JavaScript code instead of going through the rigorous process of creating animations using Flash.

This process is significantly better for mobile animations, as many devices require scalable graphics to function properly on small screens (and SVG is known for scalability).

With mobile animation on the up-and-up, developers will need to find better ways to create animation that works on a variety of different devices both large and small. So far, SVG and JavaScript on canvas is the best solution.
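As a toy illustration of generating scalable graphics from code, here is a sketch that builds an SVG circle as a markup string in plain JavaScript. In a real page you would typically create elements via the DOM or draw to a canvas 2D context instead; the function name and dimensions here are made up for the example.

```javascript
// Build a resolution-independent graphic as an SVG string. Because SVG
// is vector-based, the same markup scales cleanly to any screen size --
// the property that matters for small mobile displays.
function circleSvg(cx, cy, r, fill) {
  return '<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 100 100">' +
    '<circle cx="' + cx + '" cy="' + cy + '" r="' + r + '" fill="' + fill + '"/>' +
    '</svg>';
}

console.log(circleSvg(50, 50, 40, 'purple'));
```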

Machine Learning

When we say “machine learning” we don’t necessarily mean robots taking over the world like The Matrix. Machine learning in web development refers to Artificial Intelligence (AI) algorithms that give programs the ability to learn without being explicitly programmed.
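To make “learning without being explicitly programmed” concrete, here is a deliberately tiny sketch: a one-variable linear model fitted by gradient descent in plain JavaScript. The data, function name, and learning rate are all made up for illustration.

```javascript
// Fit y ≈ w * x to example data by gradient descent: the program
// "learns" the slope w from the examples instead of having it hard-coded.
function fitSlope(xs, ys, learningRate = 0.01, steps = 1000) {
  let w = 0;
  for (let i = 0; i < steps; i++) {
    let grad = 0;
    for (let j = 0; j < xs.length; j++) {
      // derivative of the squared error (w * x - y)^2 with respect to w
      grad += 2 * (w * xs[j] - ys[j]) * xs[j];
    }
    w -= learningRate * (grad / xs.length);
  }
  return w;
}

// Toy data generated from y = 3x; the fitted slope ends up close to 3
console.log(fitSlope([1, 2, 3, 4], [3, 6, 9, 12]));
```

Real machine learning systems fit millions of parameters rather than one, but the loop above is the same basic idea.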

Machine learning is incorporated in flashy, headline-grabbing technology like Google’s self-driving car, but it can also handle tasks like tracking activity in one application and relaying that to a different one (say, what you watched on Netflix to what you buy on Amazon).

As of right now, machine learning is still relatively in its infancy. Tools using machine learning run the gamut from performing simple computing tasks all the way up to problem-solving tech like Amazon’s Echo or even Apple’s Siri.

As the demand for this sort of “smart technology” continues to grow, so will the demand for developers who understand how to use it properly.

Basically, if you want to make sure your job is relevant in the next 20 years, familiarize yourself with machine learning and you’ll most likely never be out of a job.


Final Thoughts

Of course, not all the trends listed here are for the faint of heart, and gaining even a modicum of knowledge in any of them will require some effort and dedication to learning. But that doesn’t mean that these trends are outside the reach of the beginner coder.

The key to successfully following the latest tech trends is to start with the basics: learn JavaScript, learn HTML, learn how to build a website.

Once you’ve mastered the “vanilla” levels of coding, you can move on to something with a little more flavor. And who knows, maybe someday you’ll be the one inventing the latest and greatest technology trend on the market.


Should Developers Learn to Design? How to Code for Looks

The relationship between designers and developers often gets a bad rap, but the two roles frequently run in parallel.

Designers are often encouraged to learn code to better understand what happens behind the scenes, and some developers actually start as designers and work their way into becoming a hybrid of both roles.

But what about developers who have no love for art? Should they learn to design too?

There are certainly enough jobs available for a pure developer who wants to stick with coding, but there’s a good case to be made for developers – even those that lack an artistic flair – to learn how to design, too.



Learning Empathy

Developers should learn how to design for the same reason designers are encouraged to learn code – empathy.

According to Stephen Caver from Happy Cog:

“The primary reason any developer should learn design is to gain empathy for the designers with whom they work. Nothing is more toxic to a project than developers and designers seeing each other as rivals.”

Developers and designers often approach projects from completely different perspectives, and both are necessary for a great user experience.

But for a developer who doesn’t understand the creative process, it can be difficult to problem solve for designers who want to implement creative strategies. You might think, “Why are we doing this?” instead of, “How can we do this?”

Learning to think like a designer will help you overcome obstacles and create solutions faster. In fact, according to ex-designer Mark Kawano, tech giant Apple encourages developers to “think like a designer.”

“I think the biggest misconception is this belief that the reason Apple products turn out to be designed better, and have a better user experience, or are sexier, or whatever … is that they have the best design team in the world, or the best process in the world,” he says.

Thinking like a designer can also help developers communicate more efficiently and collaborate on projects with less interference.


How to Think Like a Designer

UX/UI designer Drew Lepp argues that there are actually two definitions of a designer:

  • Someone who designs
  • Someone who is creative with a purpose

Even if a developer doesn’t fall into the first category, they should strive to fall into the latter. According to Drew, there are six ways to be creative with a purpose:

  1. Strive to do better
  2. Be relentlessly optimistic
  3. Dream big
  4. Have empathy
  5. Be comfortable with the uncomfortable
  6. Bring clarity to complex ideas

Thinking like a designer is essentially about looking at a project with a broader scope, so that you’re not just following directions; you’re innovating. It’s about understanding the needs, desires, problems and aspirations of a business so that you can experiment with forward-thinking solutions.

But thinking like a designer is also about understanding how aesthetics play a role in the final product.

It’s about focusing on things like the look and feel of a website, how color psychology affects the end user, how to unclutter a website for easier navigation, and how to organize elements to net the highest conversions.

Learning the principles of design can help developers become better coders, and coding for visual appeal and user experience will improve your chances of building a high converting site or application the first time around.

You may also have the opportunity to help clients who don’t know what they want. You will be able to clarify confusing elements of a project to them, as well as understand their wants and needs better than code alone can communicate.

Basically, thinking like a designer will help you become more well-rounded in your profession, helping you work better in teams and one-on-one with clients.


Where to Learn Design

So how does a developer learn to code for looks?

One of the quickest ways to learn something is to ask someone who has done it. This means picking the brains of other designers or hybrids and taking note of what matters.

Another way is to find a course or program that specifically teaches design. David Kadavy at Design for Hackers has a program geared toward developers, and places like Lynda.com specialize in design programs. Or you can find design tutorials on EnvatoTuts+.

But what sort of things should you focus on?

Even if you don’t want to learn how to design a whole website from scratch, you should understand design principles from areas including:

  • Font – how to choose the right fonts for headers, subheaders, body copy, and accents
  • Sizing – how to size elements like fonts and images to stand out on a page
  • Color – how color psychology affects conversions and how to choose an appropriate color palette
  • White space – how to space distances between elements and how to get rid of unnecessary clutter
  • User experience – where to place design elements like pop-ups and navigation to improve UX
  • Copy – how to use copy to inform and move users through the site effectively

Having a general understanding of these principles will help you communicate with designers (and clients) who make design requests to you that seem unreasonable or confusing.

Again, you will understand the “why” behind the choices so you know how to respond if something simply won’t work from a development perspective, too.


Final Thoughts

Just because you learn the principles of design doesn’t mean you have to become a designer. In fact, there are some out there who believe that it’s better not to hire or work with a designer/developer hybrid (though others will argue the opposite).

But you don’t have to work as a designer to think like one. Understanding the basic principles of design, how to communicate like a designer, and how to implement creative solutions to improve user experience is a boon for any developer.

You don’t have to be artsy to think like a designer either. Creativity comes in all shapes and sizes, and more often than not, clients will be looking for developers who understand creative concepts and solve their problem in an equally creative fashion.