David Vedvick

It's been coded

Concurrency vs. Parallelism: A breakfast example

One of the harder problems in Computer Science is concurrency. This summer at my employer, I gave a presentation on asynchrony, which I consider nearly the same thing as concurrency. One of the things that I felt was lacking was my explanation of concurrency vs. parallelism. I ended up just giving a textbook explanation:

  • Parallel processing is taking advantage of a lot of processors (local or remote) to run calculations on large volumes of data
  • Asynchronous execution is freeing up the processor to do other things while a lengthy operation is occurring

This morning, however, I was making myself breakfast, and I thought up a useful analogy.

A breakfast example

My breakfast this morning consisted of a coffee and two slices of toast with peanut butter.

A perfect example of concurrency was how I made the breakfast: I first started the toast, then put the coffee cup in the Keurig and pushed the brew button. This is a concurrent operation - one job (toasting the bread) was started, and once it began, I (the processor) was freed up to start another job, the "brew coffee" operation.

We can take this analogy further: the toaster can actually process two pieces of bread at once, which is a parallel operation. From here, we can easily see that parallelism is a subset of concurrency: technically, the toaster is performing two operations concurrently; what makes it a parallel operation is the fact that it's the same process occurring twice, started at the same time within the same machine.

Shall we write this in C#?

using System.Collections.Generic;
using System.Threading.Tasks;

public static class Program {
  public static async Task Main() {
    await MakeBreakfast();
  }

  public static async Task MakeBreakfast() {
    // Start toast - this operation takes the longest to complete, so let's get
    // it started as soon as possible
    var toaster = new StandardToaster(new ElectricitySupplier());
    var toastingTask = toaster.Toast(new WheatBread(), new WheatBread());

    // Now start the Keurig, a relatively short operation
    var brewer = new Keurig(new Water());
    var brewingTask = brewer.Brew(new DarkCoffee());

    // Don't return control to the human until both operations complete
    await Task.WhenAll(toastingTask, brewingTask);
  }
}

public interface Toaster {
  Task<IEnumerable<Toast>> Toast(Bread firstSlice, Bread secondSlice);
}

public interface Brewer {
  Task<Coffee> Brew(GroundCoffee groundCoffee);
}

Concurrency and Parallelism in Real Life

The thing about concurrency and parallelism is that we do both all the time in real life; for example, humans are terrible at multi-tasking (parallel processing), but are great at starting multiple jobs and then taking action as each finishes (concurrent processing).

I encourage everyone to always think of how the things they do in real life apply to different concepts in software development. Since software development is all about automating real life processes, these analogies actually occur much more frequently than one would expect!

Note posted on Sunday, December 3, 2017 11:45 AM CST - link

What is Software Engineering and are we Software Engineers?

In our day jobs, we often call ourselves many things (or our HR departments call us these terms for us):

  • Software Developer
  • Programmer
  • Software Designer
  • Software Engineer
  • Technologist (what does this even mean?)

I think most of us prefer the term "Software Engineer" at the end of the day; it tends to properly convey the problem-solving, due diligence, and rigor required in our job, even if we don't apply those three traits all the time. We may even work with people who ensure the validity (and thus quality) of our software, and who hold the title of "Software Quality Engineer".

But what really defines a Software Engineer? It's not enough to just have a title that gives a good feeling of the difficulties our job entails - is it? Well, this question is answered easily enough - a Software Engineer is someone who practices Software Engineering!

But then the next practical question becomes... what is Software Engineering? Wikipedia gives Software Engineering five possible definitions, which I'll repeat here for convenience:

  • "research, design, develop, and test operating systems-level software, compilers, and network distribution software for medical, industrial, military, communications, aerospace, business, scientific, and general computing applications."

  • "the systematic application of scientific and technological knowledge, methods, and experience to the design, implementation, testing, and documentation of software";

  • "the application of a systematic, disciplined, quantifiable approach to the development, operation, and maintenance of software";

  • "an engineering discipline that is concerned with all aspects of software production";

  • "the establishment and use of sound engineering principles in order to economically obtain software that is reliable and works efficiently on real machines."

OK, so many of these definitions look somewhat flimsy on their face; perhaps for a proper definition, one should look at what gave birth to other engineering disciplines.

The Birth of an Engineering Discipline

In a keynote address at the International Conference on Software Engineering, Mary Shaw of Carnegie Mellon University described how an engineering discipline emerges through these rough steps:

[Diagram: From craft practice to engineering discipline - the process a craft practice goes through to become an engineering discipline]

  1. People begin using a new discovery or tool to solve some problems in simpler, newer, faster ways; this is the emergence of a craft practice around this discovery - say, "the emerging field of <x>" - or, in our case, the emerging field of Computer Science from the late 1950s through the 1980s.
  2. Some of these people start businesses with their new, disruptive products, others are hired to disrupt existing businesses
  3. The businesses begin running into problems with the new products; eventually, practitioners of the craft in the field cannot solve problems of ever-increasing complexity alone, spurring the need for research into how to solve these problems
  4. Scientists develop new practices and methodologies, make new discoveries in the field to solve problems; in order to properly disseminate these findings, the scientists use all tools available: documentation, training, etc.
  5. The findings of these discoveries eventually coalesce into a discipline; finally we have engineering!

Software development is far along the path of becoming an engineering practice; people use the computer sciences to solve common problems in business, government, and medicine. As problems are found with current patterns and practices, solutions follow, and are disseminated through many avenues (a common one in our field is Stack Overflow!).

Some tools and processes seem to introduce sea changes in producing reliable code. The SOLID approach to object-oriented software design was one; from it, a whole literature has stemmed on producing consistently SOLID software designs. Peer review and test-driven development also seem like step-wise improvements in producing reliable software: by stressing, at development time, as many variations of the state in a finite state machine as possible, a developer ensures not only correct operation of the code today, but also its correct operation going forward.

However, we aren't quite at the point where the science and tooling and practice sides of the equation have caught up to produce highly reliable code and solve novel problems at a high frequency.

If Building Software is not yet an Engineering Practice, are we Engineers?

Now comes the chicken and egg question: does an engineering practice make an engineer, or does an engineer make an engineering practice?

Instead of using the above process that Mary Shaw went through to define an engineering practice, let's look at what defines an engineer. From there we can maybe answer the question of what an engineer is without answering the question of what an engineering practice is.

Let's think of an electrical engineer and a certified electrician: both are capable of designing working electrical circuits. Both are knowledgeable in the real-world limits and dangers of electrical equipment and components. An electrician can likely solder components onto a board just as quickly and deftly as an electrical engineer. In other words, they're both capable of understanding and applying circuit theory.

Where do they differ? What makes an electrical engineer's degree and certification harder to achieve? What do electrical engineers bring to the table that electricians do not? Perhaps the engineer solves novel electrical problems, but I think an electrician is also capable of that when working within, and building on, his own knowledge. Perhaps the engineer is tasked with staying at the forefront of his field, but a good electrician should also stay current with the field (and may be required to by regulation as well).

It seems more correct to say that electrical engineers (are supposed to) have the ability to contribute back to the field of electrical engineering when a novel problem requires a novel solution outside the bounds of the field's existing body of knowledge. So maybe it is sufficient to say that an Engineer has enough mastery of the field they work in to contribute novel solutions beyond the discipline as it exists in that moment.

So are software engineers, you know, engineers? In most engineering disciplines, proper testing and certification are required by state and national boards in order to properly claim that one is an engineer. This type of certification does not yet exist for software. IEEE offered a Certified Software Development Professional program at one time, but that was discontinued in 2014. Instead, they now offer certifications in multiple areas of software development, with the reasoning seemingly being that software development currently covers too broad a spectrum to be grouped into one certification.

So at present, it doesn't seem that there are any widely recognized certifications that provide a definitive "software engineer" title. However, that doesn't mean that there are not many of us today who are in effect practicing the same disciplines as other engineers; it just may be that there is not yet enough agreed upon material for there to be a known, written determination of what makes the software engineering discipline.

So what?

I do predict that one day - perhaps 1500 years from now, but hopefully not that long - the title of Software Engineer will be a professional distinction that requires full testing and certification. At the end of the day, does any of this matter? If the rest of the industry is using the title of "Software Engineer", then there doesn't seem to be any good reason to be apprehensive about using it. However, I think after taking all of the above into account, we should feel encouraged and motivated to continue growing our practice, and contributing as much as we can to the development of software engineering as a discipline!

Note posted on Thursday, May 19, 2016 12:31 AM CDT - link

lazy-j: Lazy Java initialization library

Coming from the C# world, while working on audiocanoe I've often had the overwhelming desire to use something similar to the Lazy class in the standard libraries for .Net. Using it, you can easily initialize any object lazily without needing to implement your own double-checked locking code.

So in a hasty moment, I wrote a library called lazy-j which supposedly guarantees your object will lazily be created the first time it is requested, using the supplied initialization function. It should also be thread-safe. It is also EXCEEDINGLY simple, here's the source:

package com.vedsoft.lazyj;

/**
 * Created by david on 11/28/15.
 */
public abstract class Lazy<T> {

    // volatile is needed for the double-checked lock: it guarantees other
    // threads see a fully constructed object once it is assigned
    private volatile T object;

    public boolean isInitialized() {
        return object != null;
    }

    public T getObject() {
        return isInitialized() ? object : getValueSynchronized();
    }

    private synchronized T getValueSynchronized() {
        if (!isInitialized())
            object = initialize();

        return object;
    }

    protected abstract T initialize();
}

There are some nice things here: it uses Java's built-in synchronized methods to do a double-checked lock for object initialization. It doesn't have all the niceties of Microsoft's library (such as different degrees of thread-safety), but it gets the job done nicely while being simple enough to understand at a glance.

Usage is also fairly simple. To instantiate a new lazy object, do something like below:

class MyClass {

    public static Lazy<MyCrazySingletonConfig> myCrazySingletonConfig = new Lazy<MyCrazySingletonConfig>() {
        @Override
        protected MyCrazySingletonConfig initialize() {
            final MyCrazySingletonConfig newConfig = .....

            return newConfig;
        }
    };
}

class SomeOtherClassThatNeedsConfig {

    public void doingThingsWithConfig() {
        final String property = MyClass.myCrazySingletonConfig.getObject().getMyCrazyProperty();
    }
}

You can view the source here!

Note posted on Tuesday, January 5, 2016 11:04 PM CST - link

Sync your media to your phone with Audiocanoe

Sometimes, Santa wants to listen to his music in his sleigh but his connection is spotty

What can Santa do? He can sync his music to his phone with the new version of audiocanoe that is in testing! Following the gif below, Santa can easily sync his favorite playlists and within minutes have them on his phone for playback:

[GIF: Syncing Playlist]

To grab it, head over to the beta test site and opt-in to help test it out!

Happy holidays!

Note posted on Thursday, December 24, 2015 1:51 PM CST - link

New Materialish Look for audiocanoe

audiocanoe is seeing some updates coming up to match Google's new Material Design specs! Take a look below:

[Screenshot: Browsing Library]

[Screenshot: Now Playing]

Note posted on Tuesday, October 13, 2015 7:20 AM CDT - link

Use Git to Manage Your Blog History!

One of the major problems of rolling your own weblog is properly managing the history of your posts.

The aim of this post is to elucidate how one can easily manage blog history, using only Git.


The best known methods for managing history of text documents have always been terrible. Yes, I'm speaking of Wordpress, but also commercial solutions like SharePoint, or the version tracking that has been built into Microsoft Word for the longest time.

Here's a list of cons that I always think of when using these tools:

  1. They track history inconsistently
  2. Comments are sometimes supported, sometimes not
  3. Diffing is usually unavailable, or is built on proprietary/internal code that probably doesn't work well
  4. History is obfuscated behind dense database models, XML formats, and/or binary formats
  5. Third-party tools are difficult to use with them
  6. Content management systems, which is what all blog engines are, need security to manage the blog, and those security systems usually come riddled with bugs and security flaws

Along comes lowly git, the little DVCS tool that could, which fills the above gaps nicely. Combine this with a nice text format such as markdown, and you've got yourself a nice, versioned document management system.

However, it does come with its own set of cons:

  1. The git learning curve
  2. Git doesn't natively store post metadata
  3. Git is a version control system, and thus doesn't track file metadata either — so "true" file creation time, last modified time are not available
  4. Wrapping git commands up in your favorite server-side language can sometimes be tricky
  5. Versioning doesn't happen automatically, but rather on intentional commits

None of this is a show-stopper, however. Yes, git is ridiculous to learn. Yes, you can't get the "true" file creation time. But none of it bothered me much.


This is how I did it with nodejs:

  1. Create a git repo (git init) where you want your posts to reside.
  2. Use a nice sane format to store metadata about your posts. I'd personally go with at least a JSON-like format. Mine looks like below:

     title: Use Git to Manage Your Blog History
     author: vedvick
     ---

    The --- signals to the parser that the metadata section is complete.

  3. Grab the posts from a configured or constant location. This is my highly sophisticated version:

     glob(path.join(notesConfig.path, '*.md'), function (err, files) { ... });

    Following a simple convention of prefixing filenames with the date the post is created, such as 20151006-use-git-to-manage-your-blog-history.md, the server can then easily and reproducibly sort the files by the created date.

  4. Parsing the notes has a little sophistication to it. Here's the code used on my server in full:

     var parseNote = function (file, callback) {
         parseNote.propMatch = /(^[a-zA-Z_]*)\:(.*)/;
         fs.readFile(file, 'utf8', function (err, data) {
             if (err) {
                 callback(err);
                 return;
             }

             var textLines = data.split('\n');
             var fileName = path.basename(file, '.md');
             var newNote = {
                 created: null,
                 pathYear: fileName.substring(0, 4),
                 pathMonth: fileName.substring(4, 6),
                 pathDay: fileName.substring(6, 8),
                 pathTitle: fileName.substring(9)
             };

             var lineNumber = 0;
             for (var i = lineNumber; i < textLines.length; i++) {
                 lineNumber = i;
                 var line = textLines[i];
                 if (line.trim() === '---') break;

                 var matches = parseNote.propMatch.exec(line);
                 if (!matches) continue;

                 var propName = matches[1];
                 var value = matches[2].trim();
                 switch (propName) {
                     case 'created_gmt':
                         newNote.created = new Date(value);
                         break;
                     case 'title':
                         newNote.title = value;
                         break;
                 }
             }

             newNote.text = textLines
                                 .slice(lineNumber + 1)
                                 // add back in the line returns
                                 .join('\n');

             if (newNote.created !== null) {
                 callback(null, newNote);
                 return;
             }

             if (!notesConfig.gitPath) {
                 // JS months are zero-based
                 newNote.created = new Date(newNote.pathYear, newNote.pathMonth - 1, newNote.pathDay);
                 callback(null, newNote);
                 return;
             }

             exec('git -C "' + notesConfig.gitPath + '" log HEAD --format=%cD -- "' + file.replace(notesConfig.path + '/', '') + '" | tail -1',
                 function (error, stdout, stderr) {
                     if (error !== null) {
                         callback(error);
                         return;
                     }
                     newNote.created = new Date(stdout);
                     callback(null, newNote);
                 });
         });
     };

    The neatest part here (and where git or some other version control system shines) is using it to determine the note's created date:

     exec('git -C "' + notesConfig.gitPath + '" log HEAD --format=%cD -- "' + file.replace(notesConfig.path + '/', '') + '" | tail -1',
         function (error, stdout, stderr) {
             if (error !== null) {
                 callback(error);
                 return;
             }
             newNote.created = new Date(stdout);
             callback(null, newNote);
         });

    Note how this doesn't actually return the true "created" timestamp of the file - it returns the committer date of the earliest commit that touched it - but that is, in my opinion, close enough.

  5. When drafting a new post, create a new branch so the draft can be worked on in isolation without affecting work on other posts (for example, I posted an entirely different post while drafting this one). For this I also follow another convention: post/<post-name-here>. Of course, the convention is optional but I think at the very least it encourages consistency.
  6. Finally, merge posts into master and push to the web server. Then add a post-receive hook that checks out master to the location determined above: GIT_WORK_TREE=<note-location> git checkout -f
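The date-prefix convention from step 3 is what makes the sorting trivial: because the prefix is a zero-padded year-month-day, a plain lexicographic sort is also a chronological sort. A minimal sketch (the file names here are invented for illustration):

```javascript
// Date-prefixed file names sort chronologically with a plain string sort.
var files = [
    '20151006-use-git-to-manage-your-blog-history.md',
    '20150726-how-i-built-my-new-site.md',
    '20150815-nifty-sql.md'
];

// Sort ascending lexicographically, then reverse so the newest post comes first.
files.sort().reverse();

console.log(files[0]); // -> '20151006-use-git-to-manage-your-blog-history.md'
```

No date parsing is needed at all until the post is actually rendered.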

Note posted on Tuesday, October 6, 2015 7:21 AM CDT - link

Custom SQLite Access Difficulties in Android

It is surprisingly difficult to access the SQLite database that the Android API exposes in any way other than through Android's native APIs. There's certainly no JDBC driver supplied.

This is unacceptable to me; mapping object fields to database fields by hand is something that we have automated numerous times in the last couple of decades. The Android API does not do this. SQLite access from the Android library is like going back to the ADO.net ages; here's an example I found duckduckgo'ing the internet:

return database.query(DATABASE_TABLE,
  null, null, null, null, null, null);

If you're thinking the above SCREAMING_CAPS look like they may be string constants, you would not be mistaken. Yes, the bare Android SQLite library drops you back to mapping an object manually. Hello, 2000.

So where did I go from here? What I've been using for a while is OrmLite, and while that certainly works, it has some massive memory leak problems, which, for a library that touts itself as "Lite" (I mean, it's in the name), is not very "lite" :).

The next step was to look for a library that just bridged the gap between database result sets and object mapping, which is all I really wanted; ideally, something like Dapper for Java. Once again, numerous tools exist in this space, but none supported Android out of the box.

DbUtils from the Apache foundation looks promising, but it once again needs a JDBC driver. Perhaps with SQLDroid I will achieve what I've always wanted!

Note posted on Wednesday, September 23, 2015 7:11 AM CDT - link

Nifty SQL

It's a very rare thing when neat code is also useful; in my experience, "neat" or "clever" code usually comes with major caveats, with poor readability and maintainability (and often arguable usefulness) chief among them.

However, I think the self-join solution to the "greatest-n-per-group" problem might be one of the exceptions to that rule. While it certainly suffers from a lack of readability (personally, it takes me a few mental acrobatics to comprehend), it makes up for that in usefulness and execution speed; since it can generally work on indexes, it can avoid expensive index and/or table scans!

Really, quite a clever solution. I wonder if there's a mathematical analog to this solution?
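For the curious, the self-join's logic - keep a row only if no other row in its group beats it - can be sketched outside of SQL as well. Here it is over a plain JavaScript array (the table and column names are invented for illustration; the SQL shape appears in the comment):

```javascript
// Greatest-n-per-group (n = 1), mimicking the SQL self-join:
//   SELECT a.* FROM scores a
//   LEFT JOIN scores b ON b.player = a.player AND b.score > a.score
//   WHERE b.score IS NULL;
// A row survives only if no other row in its group out-scores it.
var scores = [
    { player: 'alice', score: 7 },
    { player: 'alice', score: 9 },
    { player: 'bob',   score: 4 }
];

var best = scores.filter(function (a) {
    return !scores.some(function (b) {
        return b.player === a.player && b.score > a.score;
    });
});

console.log(best); // one top row per player
```

In SQL, the database can satisfy the `b.player = a.player AND b.score > a.score` predicate from a (player, score) index, which is where the speed comes from.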

Note posted on Saturday, August 15, 2015 8:26 AM CDT - link

When the Boss comes on, it's gonna be good

[Image: The Boss]

Note posted on Saturday, August 8, 2015 4:06 PM CDT - link

How I Built My New Site

Long ago, shortly after I had started hearing about React on Hacker News, I came across a post that suggested that it was fairly trivial to build a blog (or any website in general) with ReactJS and NodeJS, and that you could not only have dynamic routing and client-side rendering, but also render views statically server-side.


For years, I had been hearing the outcry about Wordpress (usually related to its unnecessary complexity and security faults), and had, for the most part, agreed. So I fiddled around with NodeJS and ReactJS for a few months, and finally reached a breakthrough point where I really felt I had the knowledge to build this site by hand and, hopefully, get a good introduction to this new ReactJS thing, and this old (but totally new to me) thing called NodeJS.

The stack I chose is fairly standard:

  • expressjs for the routing
  • express-react-views for displaying statically rendered ReactJS views
  • ReactJS for the front-end views
  • GulpJS for build and deployment

To reduce processing time in my release environment, most of the pages are rendered and built once, in my GulpJS deploy task, to an HTML file using ReactJS's React.renderToStaticMarkup method (this avoids a render call on every request, and allows browsers to cache more). This page is the exception, of course.


Building the site was just a develop/refresh cycle, with what I would assume is fairly vanilla ReactJS, so I won't bore you with the details. GulpJS, along with a watch task, provided extraordinary amounts of help here.


Once development was finished and the site had a style that I enjoyed (simple does it!), I began focusing on deployment. I currently use nearlyfreespeech as my host provider, and I decided to stay with them if I could. Luckily, nearlyfreespeech hasn't stood still since I started hosting with them in 2012: they have since enabled long-running processes, along with proxies to connect to them. Naturally, they also have the most recent versions of NodeJS and NPM installed.

The downside, given nearlyfreespeech's current pricing model for continuously running processes, is that Node tends to be fairly resource-heavy.

For deployment, I fell back on GulpJS again. Unfortunately, there wasn't any first-party support for SSH in GulpJS, so I used a plug-in called, aptly enough, gulp-ssh (notice the pattern here?). This plug-in competently handled connecting over SSH; however, it was fairly limited in how it sent files - it only allowed files to be sent one at a time.

What I wanted was an SSH version of gulp.dest. I ended up adding a prototype method to gulp-ssh that did exactly that, called using gulpSsh.dest. I still have my doubts as to whether that functionality should exist independently in a separate plug-in, perhaps called gulp-ssh-dest, or something similar, operating in the spirit of Gulp. For now I will leave it as is, and perhaps come back to it later if I decide to make a separate plug-in which just covers that functionality.

You can view the source for the site on my github. My updates to the gulp-ssh plug-in can be reviewed and improved as well.

Note posted on Sunday, July 26, 2015 8:42 PM CDT - link