Creating practically private methods in mixed-in Ruby modules

Module injection in Ruby is cool. It allows you to flexibly compose an object’s behavior (though it’s not composition; it’s multiple inheritance), while still leveraging the data/code coupling power of objects. However, it comes with its own set of complications.

A particular module might want to reduce its namespace footprint by exposing a small interface and hiding implementation functions. Take a look at the code in this gist. Both modules define an aptly-named private_method, but once they’re both included in the class, the second private method overwrites the first.
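Since the gist isn’t reproduced here, here’s a minimal reconstruction of the collision (module names and method bodies are my own guesses, not the gist’s):

```ruby
module FirstModule
  def first_public_method
    "first: #{private_method}"
  end

  private

  def private_method
    "from FirstModule"
  end
end

module SecondModule
  def second_public_method
    "second: #{private_method}"
  end

  private

  # Included later, so this definition wins the method lookup...
  def private_method
    "from SecondModule"
  end
end

class Host
  include FirstModule
  include SecondModule
end

# FirstModule's helper has been clobbered:
Host.new.first_public_method  # => "first: from SecondModule"
```

Because `SecondModule` is included last, it sits earlier in the ancestor chain, and both public methods end up calling its version of private_method.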

Ruby’s private keyword does not do what I want here, as methods defined under it still share a namespace with all other included methods. You do not want a module’s private methods to have to compete for namespace, as no other parts of your code should even know they exist! That sharing creates a strong, hidden dependency.

There is an easy, common solution, but I have to admit that it feels hacky, and is certainly not semantic. By which I mean, this solution would never have occurred to me on its own. But here it is: effectively private methods can be hidden within classes or modules defined inside your original module, like in this gist. The idea is summarized as follows:

module MyFirstModule
  def first_public_method(my_string)
    return "#{my_string} #{PrivateMethods.private_method}"
  end

  module PrivateMethods
    def self.private_method
      # (body not shown in the original gist; return value is illustrative)
      "hidden"
    end
  end
end
This module can be included in your model classes, and Bob’s your uncle! The module exposes a public interface which makes use of private methods that truly do not affect anything else. It’s not too complicated, but don’t you agree that it’s a bit gross?

This doesn’t solve every problem of module injection, but it allows you to reduce surface area and make necessary dependencies more obvious. I’m currently trying to figure out how to prevent modules from being so easily able to call and depend on each other in inexplicit, unobvious ways. Like in this gist.


Filed under Uncategorized

RESTful API design and implementation

Recently, while building the forest’s API and finding myself running into problems I’d never imagined would need to be solved, I realized that I know very little about API design.

This implies I’ve learned some, and simply have a better view of the field than I did. This article (which concerns both design and implementation) discusses the things I have learned and contended with.

I’ve been reading, and studying a couple of different APIs (e.g. GitHub, which is my favorite), to see how it’s done.

Versioning

APIs need to be versioned so that the developer can easily release new work. There are two major schemes for exposing different API versions: through request headers, and through the URI.

The header solution is the “most correct”, partially because it keeps URIs as an explicit and clean reference to a particular resource. Properly, the API version doesn’t have any bearing on the resource the client is trying to access, and thus doesn’t belong in the uniform resource identifier. That’s a sound argument.
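For contrast, here’s a sketch of what the header scheme can look like in Rails: a routing constraint that inspects the Accept header. The constraint class and the vendor MIME type are my inventions, not anything from the forest.

```ruby
# A routing constraint that matches requests whose Accept header
# names a particular API version, e.g. "application/vnd.forest.v1+json".
class ApiVersion
  def initialize(version)
    @version = version
  end

  def matches?(request)
    request.headers["Accept"].to_s.include?("application/vnd.forest.#{@version}")
  end
end

# config/routes.rb would then use it like:
#   scope module: :v1, constraints: ApiVersion.new("v1") do
#     resources :locations
#   end
```

The URI stays clean, at the cost of version selection being invisible in the address bar.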

However, several major players do use “api/v1/resource”, the improper and less technically RESTful method. And I’m doing it, too.

It’s easier for development. I don’t have to use curl or Postman to see what my responses look like; I can just visit localhost in the browser. I also believe it’s a less opaque technique, and the only argument against it is an appeal to authority.

Here’s a good StackOverflow answer explaining why I’m wrong.

I agree that URIs should be permalinks, but a major API version change is likely going to introduce breaking changes. API consumers are going to have to change how they request the resource in any case. I’m much happier keeping the version totally in sight.
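In Rails routing terms, the URI scheme falls out of nested namespaces; a minimal sketch (resource names are placeholders, not the forest’s actual routes):

```ruby
# config/routes.rb
Rails.application.routes.draw do
  namespace :api do
    namespace :v1 do
      # GET /api/v1/locations -> Api::V1::LocationsController#index
      resources :locations, only: [:index, :show]
    end
  end
end
```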

URIs dependent on params, not IDs

I want clear, beautiful URIs. Locations are best known by their Cartesian coordinates, not their IDs. Any Rails developer already knows that this is somewhat of a pain. Although custom routes are fairly trivial in Rails, these schemes can very quickly and easily get out of hand if not managed properly; such is Rails’ dependence on naming conventions.

I went through a couple of ideas, like /locations/x/1/y/2, but I eventually settled on a query string: locations?x=1&y=2

I found a couple of people discussing whether query strings are RESTful on StackOverflow. It would seem that there’s no reason for them not to be; REST merely asks that resources have unique endpoints.
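Controller-side, the query-string lookup might look something like this (model, params, and controller names are my assumptions, not the forest’s actual code):

```ruby
# app/controllers/api/v1/locations_controller.rb
class Api::V1::LocationsController < ApplicationController
  # GET /api/v1/locations?x=1&y=2
  # Finds a location by its coordinates instead of by ID.
  def index
    location = Location.find_by(x: params[:x], y: params[:y])
    if location
      render json: location
    else
      head :not_found
    end
  end
end
```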

Of course, dictating to Rails that I want every location’s URI to use the query string rather than an ID meant clobbering DHH’s opinions.

Here’s some of the weirder stuff I had to do to get this scheme to work:

JSON layouts

That bullshit concerning rails-api and JBuilder that I already wrote about.

It makes no sense that this isn’t built into rails-api. The main draw of Rails is that it’s opinionated, and thus does a bunch of heavy lifting for the developer at the cost of flexibility.

There’s a bunch of stupid rails-api stuff I’ve complained about on Twitter.

HATEOAS

This is a large topic that dictates much of the forest’s design. It’s taken a lot of work, and I barely have it in place. HATEOAS is an aspect of REST that most APIs do not bother with; developers (not unreasonably) expect consumers to rely on separate documentation to get around. GitHub is my model API because they actually do implement it, and they implement it well.

I would love some back-end framework that makes this easier, like Rails makes every other common task easier. That’s a niche I would have expected rails-api to fill. Yes I’m bitter.

HATEOAS breaks down into a few sub-issues.

  1. Return relative or absolute URIs?
  2. Where should actions be in the response? All at the top level in one list or object, or some actions per object?
  3. What do action routes look like?

So, 1), I’m going with relative, rather than absolute, URIs. A couple of the reasons for this are, unfortunately, just that Rails makes relative URIs easier. URL helpers like url_for and [resource]_path return relative URIs by default. Additionally, RSpec makes it hard to get(absolute_url), so I’d have to add a bunch more boilerplate to the tests.

Here is a good discussion on paths on StackOverflow.

GitHub uses absolute URIs for action routes. They say it’s so the client doesn’t have to construct the URI themselves. Fair enough; it does take a little work for the client to construct URIs under the scheme I’m using. However, they would probably only need to implement that solution once per project. I don’t expect it to be too onerous.
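For what it’s worth, that one-time client-side construction is tiny. A sketch in plain Ruby (the helper name and base path are mine, for illustration):

```ruby
require "uri"

# Builds the relative URI for a location from its coordinates.
# (Helper name and base path are assumptions, not the forest's actual client code.)
def location_uri(x, y)
  "/api/v1/locations?#{URI.encode_www_form(x: x, y: y)}"
end

location_uri(1, 2)  # => "/api/v1/locations?x=1&y=2"
```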

2) Right now, actions in the API response are presented like this:

  {
    "actions": { ... },
    "objects": [{
      "kind": "wolf",
      "actions": { ... }
    }]
  }
Actions are stored in location.actions and in location.objects[i].actions. They are organized according to the object the action is on, but as a result are scattered throughout the JSON.

I could also have a big ol’ hash or list at the top level simply labelled actions which would contain every action possible at that moment. It would be very easy to find, and namespacing actions within could help organize them.

My main concern here isn’t whether one option is more correct than the other. I’m struggling with which solution would be easiest for the API consumer.

In terms of development from the API maintainer’s perspective, the current setup is easiest. So that’s that. If it turns out to be awful from the consumer perspective, I’ll look into changing it.

3) And what should actions look like? Specifically, let’s say hello to a wolf. Is that:

  • /wolves/1/action/say-hello
  • /wolves/1/say-hello
  • /wolves/1?action=say-hello

I’m pretty sure I’m gonna go with the second option, although if the forest ever has nested resources, it could get to be a problem. The solution is to say “no nested resources!”, but that seems pretty limiting. We’ll see!
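In Rails routing, the second option is a member route; a minimal sketch (the resource and action names are my assumptions):

```ruby
# config/routes.rb
resources :wolves, only: [:show] do
  member do
    # POST /wolves/1/say-hello -> WolvesController#say_hello
    post "say-hello", to: "wolves#say_hello"
  end
end
```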


Designing and building this API has taught me so much that I didn’t know I didn’t know. It’s been so much fun! Because my collaborators (looks like Ryan‘s on board!) and I are basically the only people consuming the API, I have a pretty good idea of what the client’s needs are. Having to deeply consider both sides has given the process a lot of helpful direction.


Filed under code, Uncategorized

What I learned from a React code challenge

I’m currently on hold with the Oregon Employment Department, and as a result don’t feel that I have the focus to design and implement some extra features. I’ll write a blog post instead!

An employer emailed me and asked me to complete a code challenge: create a simple to-do list in React. You can see the todo list I built here. I would have liked to style it a bit just to show off my design and CSS chops, but I focused on implementing features and ran out of time.

I’ve written todo lists in React before, but I always followed a tutorial. This time, I kept it honest and wrote everything from scratch. Without someone guiding me, it took longer than I expected, but I learned an awful lot more! I found myself really enjoying this challenge.

I probably made some beginner mistakes, and implemented some anti-patterns. If you notice any in my code, I’d love to know about them.


Step one was to find a React and Webpack boilerplate. I was googling around, and found plenty of super-complex, production-ready solutions. They transpiled ES6 or 7, and included hot module reloading, local dev servers, deployment scripts, and test helpers. I didn’t want any of those. I wanted something that would use Webpack to build all my JSX modules into one JS file that I could include in some basic index file I could open in my browser. So I Googled “react webpack barebones boilerplate”.

This very simple template is one of the first things to come up, which is surprising because it simply doesn’t work. Among other things, these days you need to include the react-dom package and use its render function to mount things to the DOM.

So I forked it, fixed it, and PR’d it. We’ll see if the maintainer accepts it, but now I could move forward with the challenge!

And like 70% of my time has been looking up and remembering syntax. I’ve written a fair amount of React and JSX, but I’ve written far more Ember and Angular. Generally, the “data down, actions up” philosophy makes plenty of sense. I think you can get a good sense of my workflow and thought process by looking at my commit history.

One of my first steps was realizing that React’s getInitialState() function unintuitively (to me) sets a component’s initial state.

Another important realization was that a component’s render() function is called every time the component’s state or props change. For a while, I assumed it was called only once in a component’s life, but the fact that it is frequently called means that work done and defined within it is essentially dynamic.

And finally, I had to re-learn how to send actions up through the component hierarchy. I’m not using a Flux implementation here, so there’s no dispatcher involved. If an action at a low level needs to change something higher up, it needs to somehow invoke a function its parent passed down to it through props.

Instead of calling a prop function directly from the render, I found that delegating to a locally-defined dispatcher function (I’m not sure what the proper terminology is here) solved a lot of problems. That is to say, instead of this:

render: function() { return <button onClick={this.props.addThingie}></button>; }

do this:

render: function() { return <button onClick={this.handleClick}></button>; },
handleClick: function(e) { this.props.addThingie(); }

Not only is this prettier, but it gives you more flexibility. Instead of the addThingie() function only getting access to the click event (and thus its target and value), you can pass addThingie() anything you’d like: possibly something from the child’s state, or something computed within handleClick(). You can also do more work within handleClick(), such as calling event.preventDefault().

In my code sample, I haven’t had the time to implement todo editing yet, but I am now confident that I could do so! Before this code challenge, I wouldn’t have had that confidence. That’s pretty cool.


Filed under Uncategorized

The Jacket.

This is just a little anecdote about a hackathon I went to in 2015. There isn’t some big declaration at the end, but if there is a thesis or moral, it’s this:

Fun is tremendously important to me.

And maybe:

Do scary things.

I signed up for a hardware hackathon (Hack to the Future, held by the super-great Adam Benzion) with my friend Stephen a while ago, just after graduating from Dev Bootcamp. It was a super uncomfortable position for us: not only were we super green as developers, barely able to cobble together a crappy CRUD app, but we were (and are) web developers. We knew next to nothing about hardware. It was terrifying.

I distinctly remember the first morning of the two-day hackathon. We were supposed to mill about and socialize. Stephen and I were so intimidated that we almost left without participating. We managed to dare each other into sticking it out for at least a little while. We had paid for entry, after all.

I talked to a few people, and was continuously made aware of the fact that I was inexperienced, out of my element, and that all my hardware ideas were stupid. Partially joking, I floated the idea of building a hat that could telescope up and down. People let me know it was a stupid and out-of-place idea. I agreed and kept quiet, fuming.

Eventually, it came time for people to announce project ideas out loud; if someone was interested in your idea, they would come over to you and teams would form sort of organically. As it came time for me to yell out, I quickly edited my original thought:


I got laughs and applause, and then I got a team: Stephen; a computer science student (Martha) and an engineering student (Stephanie) from SFSU; and a couple pals (Ken and Tinic) with deep hardware prototyping experience (e.g. laser cutting and 3D modelling and printing). After that, I actually had to turn a couple people away. People were excited to build this stupid project of mine. I was gushing, proud, and terrified. Then we got to work.

Marty McSleeves was born.

Vincent wearing the jacket while Stephanie works on wiring it together

A lot of the time, I served as team mannequin.

It was magic. One of the team members donated a jacket. Martha and I did some sewing. We modelled and printed the spools and housing, spooled fishing line onto them, and sewed them into the jacket’s shoulders. I spent all night at Noisebridge in the Mission sewing washers into one of the sleeves! The servo motors (housed in components built by Ken and Tinic) were controlled by a Spark Core, which exposes its internal methods as an easy-to-access web API. Those methods were written by Stephanie and Martha. Stephen wrote a web client using Node and Express, we deployed it on Azure, and Bob’s your uncle. More than once the spools got tangled and jammed, and some API calls behaved erratically, but we had done it:

A jacket whose sleeves would go up and down based on a button you click in your browser, and I got to call myself “team lead” the whole time! And we made it in 48 hours! I am still stupidly proud of it.

You can see the Hackster project page here.

The 3D-printed components in the jacket, and the servo motors they housed.

Servos with 3D-printed spools and their 3D-printed housing, which would later be sewn into the shoulders of the jacket.

Here’s a video of this janky, hacked-together thing in action.

I made friends, I had fun, and I learned a lot by doing something I was totally uncomfortable with. I think this was true for my teammates as well. It was a huge personal win.

Then came the judging.

Which, I dunno. I have very mixed feelings about judging in hackathons. Unless there’s a specific need for it, I tend to favor the approach of “the team that had the most fun won!”. But we had judges, and they had criteria.

We set up our table like it was a science fair, and I delivered our spiel. As it happened, when the judges came to see our results, the jacket’s spools were jammed and we couldn’t demo. I hardly saw that as an issue; we showed them a video instead.

But the judges were obviously disappointed. They followed up by asking me what sort of market I foresaw for this kind of product.

I said this was for fun. Obviously no one would or should try to market this.

We got an honorable mention.

Almost every other team got a special sponsor prize with an actual reward. Almost every team also tried to create something useful or marketable. I somewhat bitterly suspect our mention was a consolation prize.

There is nothing wrong with creating a product that helps people or is profitable. Indeed, those are good things that we should usually strive for.

But this was a hackathon. I approached it as a space in which to create without judgement, cut off from the demands of Silicon Valley. I was bummed! Oh well. By my criteria, we won by miles.

There are other hackathons out there with a philosophy closer to mine, e.g. the beautiful Stupid Shit No One Needs and Terrible Ideas “Hackathon”.

The Snackathon

And in a couple weeks, I’ll be self-hosting what I’m calling the Snackathon, a tiny little hackathon among friends with a similar idea. Maybe every single project will win an esoteric and customized prize. Maybe they will be snacks.

The primary goal is for it to be fun. Let me know if you’re interested!

I’m so stoked.


Filed under Uncategorized

A reintroduction to TDD

So I’ve been writing a forest.

And I’ve been writing tests for it.

It’s been a lot of fun, and I’ve gotten a second wind on the project in the past couple weeks. I’ve written a bunch of code!

However, a lot of features have yet to be implemented. The code I’ve been writing has largely been RSpec, which you can see in this PR. As of right now, most of those tests fail, which isn’t a mistake.

I’ve been TDDing the forest a lot. Not everything, but a lot of things. Professionally, I’ve always regarded TDD like veganism: yes, you can make a lot of great arguments in its favor that I cannot honestly counter, but I just can’t imagine myself actually following through with it. So while I’ve historically written tests, they usually don’t come first.

Good arguments against TDD exist, and I certainly don’t think it’s the be-all and end-all. Some people are really off-puttingly dogmatic about it. But I’ve discovered, personally and for the first time, reasons to do it.

  1. It’s meditative. I’m not going to substantiate this because it’s very subjective, but there. It feels good.
  2. Non-passing tests act as a very difficult-to-ignore to-do list. I have a bunch of to-do lists devoted to the forest, including Trello, post-it notes, and #TODO and #FIXME comments scattered throughout my code (which is great for indelibly affixing a task to a particular line), but nothing encourages me to act more than a yellow or red dot in a sea of green.
  3. I’m not on a timeline. TDD does take longer up front, at least in my experience. Maybe one day I’ll get good enough at it that it won’t take any longer than writing the other way around. Of course, professional projects usually are on a timeline, so this reason might not convince your boss.
  4. It makes my coding intentional. Every time I write a method, it is to get a particular behavior. If I start building a workaround that goes off into some space not covered by the spec, I feel it immediately.
  5. It makes my code prettier. This is also subjective, but I think it does.

I’ve definitely complained about TDD before, because if you are in a time crunch it can feel very, very slow. “What do you mean I have to write code before I can write my code? This is bull!”. But this forest is a nice place. There’s time to do things right here.


Filed under code, Uncategorized