A Client Project, Two Years Later

Derek Prior

Throughout most of 2013, thoughtbot worked with T1D Exchange to develop the Glu patient and caregiver community site. Glu gives people living with type 1 diabetes (T1D) and their supporters a place to share their experiences, find support, participate in research surveys, and learn about clinical trials. Glu not only publishes articles about life with type 1 diabetes, but also shares the results of its research with the community, showing how participating in real-world research can make a difference.

At its height, thoughtbot had a team of three developers, one designer, and a design apprentice working on Glu. Our involvement with the project wound down in late 2013, and T1D Exchange eventually hired its own developer, Daniel Nolan, in 2014.

I had an opportunity to meet Daniel at RailsConf this past spring. As consulting developers, we don’t often get to follow up with the developers who take over our work at client companies, so we thought we would exchange emails about his experience maintaining, improving, and growing the application over the past year and a half.

On Working For T1D Exchange

Derek: You and I never overlapped on the project; in fact, no thoughtbot developers did. How did you come to work for T1D Exchange?

Daniel: I started working for T1D Exchange at the end of April 2014. I was contacted by a recruiter who arranged for me to meet with Brian Becker, Senior Product Manager for T1D Exchange. Brian is the one who originally sought out thoughtbot to revamp Glu. We met for lunch and he explained everything about T1D Exchange. The opportunity to work for a non-profit while making a real difference in people’s lives sounded rewarding and exciting. He explained that thoughtbot had built the platform and that T1D Exchange was looking for a full-time developer to maintain it and add new features.

The fact that thoughtbot had built the platform sparked my interest in the position even more. Later that week they made me an offer to come work with them and I accepted.

On Following Sandi Metz’ Rules

Derek: We tried to follow Sandi Metz’ rules as closely as we could to see what kind of project it would lead to. Was this something you picked up on when you joined? What did you make of it when you were coming up to speed with the code base?

Daniel: When I was initially introduced to the code, I read through the technical standards wiki page for the project on GitHub, which outlined Sandi Metz’ rules and the thoughtbot style guide. I’ll admit it was a little overwhelming at first. I had watched a talk Sandi gave about the rules, but I had never seen them applied across an entire project.

At first, I felt the number of objects introduced by following the rules was overkill. Once I began adding new features and extending existing ones, it became clear that those objects were justified. When methods are no longer than five lines and classes are under 100 lines, the code is much easier to understand and follow, in my opinion. I especially like the use of facades or presenters in controller actions, which lets a single object be passed into the view.
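
For readers who haven’t seen the pattern, here’s a minimal sketch of a facade in that style. The names are hypothetical, not from the Glu code base; the point is that the controller action stays tiny and the view receives exactly one object:

```ruby
# Hypothetical facade: gathers everything the dashboard view needs.
class DashboardFacade
  def initialize(user)
    @user = user
  end

  def recent_activities
    Activity.order(created_at: :desc).limit(10)
  end

  def unread_message_count
    user.messages.unread.count
  end

  private

  attr_reader :user
end

# The controller action stays well within the five-line rule and
# exposes a single object to the view.
class DashboardsController < ApplicationController
  def show
    @dashboard = DashboardFacade.new(current_user)
  end
end
```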

I have been adding and extending features for over a year now and have continued to follow her rules. It takes discipline, but the payoff is definitely worth it. I have done some green field projects for T1D Exchange and have chosen to follow her rules on those projects as well.

Derek: We also tried to follow the five-line rule in most of our specs, and in hindsight it’s probably my least favorite application of the rules. We used a lot of page objects in the feature specs to facilitate that, and while it made for readable tests, it took a lot of effort. Looking back, I wonder if it also introduced too much indirection.

I also think our adherence to the five-line rule and the four-phase test meant that we had individual feature specs for things that may have been simpler, and definitely would have been faster, to test in a single spec. Did you have any difficulties with that or with the tests in general?

Daniel: I don’t really think it’s practical to follow the five-line rule in specs. I feel it’s okay for specs to be a little more verbose, since they are describing the application code. Although page objects keep the specs clean, I’m not sure they are worth the effort just to keep each spec under five lines. Page objects make it harder to immediately recognize what a spec is doing without looking at the methods defined on the object, which are often in another file.
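
As a rough illustration of the trade-off Daniel describes, a page object in a Capybara feature spec might look something like this (the page, paths, and methods are invented for the example):

```ruby
# Hypothetical page object wrapping the sign-in flow.
class SignInPage
  include Capybara::DSL

  def visit_page
    visit "/sign_in"
    self
  end

  def sign_in(email:, password:)
    fill_in "Email", with: email
    fill_in "Password", with: password
    click_on "Sign in"
  end

  def welcome_message
    find(".flash").text
  end
end

# The scenario body stays under five lines, but a reader has to open
# the page object to learn what sign_in actually does.
scenario "user signs in" do
  sign_in_page = SignInPage.new.visit_page
  sign_in_page.sign_in(email: "member@example.com", password: "password")

  expect(sign_in_page.welcome_message).to include("Welcome back")
end
```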

Overall, I didn’t really have any difficulties with the tests. I think multiple expectations in a single feature spec make a lot of sense to help with the speed of the test suite, and that’s difficult to do while keeping each spec under five lines.
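
The alternative both of them gesture at is a single, more verbose scenario that walks one flow and asserts several things along the way, paying the browser-startup cost only once. A hypothetical example:

```ruby
# One scenario, several expectations: faster than three separate
# feature specs that each drive the browser through sign-up.
scenario "user signs up and lands on the dashboard" do
  visit "/sign_up"
  fill_in "Email", with: "new-member@example.com"
  fill_in "Password", with: "password"
  click_on "Sign up"

  expect(page).to have_content("Welcome to Glu")
  expect(page).to have_css(".activity-feed")
  expect(current_path).to eq("/dashboard")
end
```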

On Fast Tests

Derek: One thing I remember taking pride in was a fast test suite. For a long time it finished in under 10 seconds despite testing a good deal of functionality. I think eventually that barrier was blown away, but the tests were still reasonably fast. What’s your experience been there?

Daniel: I wish the test suite ran in 10 seconds! It was right around 50 seconds on my MacBook Pro when I started working on it, and it takes just over a minute now. I’ve had to add some feature specs for the business-critical pieces of the admin interface, and with the new features I have added we are at around 990 specs. I think it runs in about 40 seconds on Solano CI.

On the Trade-Offs of External Services

Derek: In the interest of focusing on achievable goals within our timeline, we ended up using external services for a couple of integral site functions. For instance, building a CMS wasn’t core to the application, but T1D Exchange staff post blog entries to the site that must integrate with other native content. For that, Brian set up a WordPress site for ease of authoring, and we synced the content into a posts table on a schedule.
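
The post doesn’t include the sync code, but a scheduled task in that spirit might pull the WordPress feed into the posts table like this (the feed URL and Post model are illustrative, not the actual Glu implementation):

```ruby
require "rss"
require "open-uri"

# Hypothetical scheduled task (run via cron or the Heroku scheduler)
# that mirrors WordPress posts into a local posts table.
namespace :wordpress do
  desc "Sync WordPress posts into the local posts table"
  task sync: :environment do
    feed = RSS::Parser.parse(URI.open("https://blog.example.com/feed/").read)

    feed.items.each do |item|
      post = Post.find_or_initialize_by(source_guid: item.guid.content)
      post.update!(
        title: item.title,
        # Prefer the full content:encoded body; fall back to the excerpt.
        body: item.content_encoded || item.description,
        published_at: item.pubDate
      )
    end
  end
end
```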

The site also has a survey feature that lets users participate in research surveys, and it had some fairly advanced requirements: we needed to support branching and different question types, provide an interface for writing surveys, and offer some rudimentary reporting. That aspect alone would have been an entire application, or an entire company, so we turned to Wufoo. We had to build some integration to track users anonymously across the two sites and to ensure users had consented to participate in the surveys, but we didn’t have to build a form builder.
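
The anonymous-tracking piece can be as small as a webhook endpoint that maps a token back to a user and records the completion. This is a hedged sketch: the param names and models are assumptions, not Wufoo’s documented payload.

```ruby
# Hypothetical endpoint that receives survey-completion webhooks.
class SurveyCompletionsController < ApplicationController
  # External services can't send Rails CSRF tokens.
  skip_before_action :verify_authenticity_token

  def create
    # The anonymous token would be embedded in the survey as a hidden
    # field so the external service echoes it back to us.
    user = User.find_by!(survey_token: params[:anonymous_token])

    if user.consented_to_research?
      SurveyCompletion.create!(user: user, entry_id: params[:entry_id])
    end

    head :ok
  end
end
```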

How have these integrations held up over time? Are their trade-offs still worthwhile?

Daniel: The WordPress site is still used by staff to publish articles. It works quite well; I haven’t had to touch any of that code since I joined the project. The Wufoo integration has held up very well too and has been used for many great surveys. There have been a couple of instances where we didn’t receive the webhook POST from Wufoo when a user completed a survey, so I had to write a task to keep the Wufoo survey completions in sync with our app database. Recently, our research team came to us with requests for survey features that Wufoo doesn’t support. As a result, I have been working on integrating an additional survey service, SurveyGizmo.
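
Daniel doesn’t show his sync task, but a reconciliation job of that sort typically re-fetches entries from the service’s API and backfills anything the webhook missed. One plausible shape, with a hypothetical WufooClient wrapper and an illustrative "EntryId" key:

```ruby
# Hypothetical backfill for webhook deliveries that never arrived.
namespace :wufoo do
  desc "Backfill survey completions missed by webhooks"
  task sync_completions: :environment do
    Survey.find_each do |survey|
      WufooClient.new.entries_for(survey.wufoo_form_id).each do |entry|
        # Idempotent: already-recorded completions are left alone.
        SurveyCompletion.find_or_create_by!(
          survey: survey,
          entry_id: entry.fetch("EntryId")
        )
      end
    end
  end
end
```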

On Building a Platform

Derek: Throughout the project there was this idea that what we were really building was a platform. Glu was the first incarnation, but there was a hope this model would work for other groups, so we tried to keep white-labeling in mind. That was difficult at times without concrete use cases, but it did lead to decisions that were beneficial on a broader level, such as using internationalization throughout the entire application. Has there been any movement on bringing the community platform to more groups?

Daniel: It is now a platform, and we have renamed it the “Community Application for Research Engagement” (C.A.R.E.). We are getting the app ready to be used for a second disease group now.

The main challenges so far have been figuring out the best way to handle per-community changes like colors, logos, and copy. We have forked the project and will continue to add features on the original code base, with that serving as the upstream for the fork. So far this seems pretty manageable. The groundwork was laid very well by your team.
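
Derek’s internationalization point is worth unpacking: if every piece of copy is a translation key rather than a hard-coded string, I18n doubles as a white-labeling seam. A minimal illustration, with hypothetical keys:

```ruby
# Each deployment loads its own translations (normally from a locale
# YAML file; store_translations is used here for a self-contained demo).
I18n.backend.store_translations(:en, community: {
  name: "Glu",
  tagline: "A type 1 diabetes community"
})

I18n.t("community.name")    # => "Glu"
I18n.t("community.tagline") # => "A type 1 diabetes community"

# A second disease group ships a locale file that overrides the same
# keys, and every view calling t("community.name") rebrands itself.
```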

On Performance

Derek: How has site performance been? The dashboard page in particular has a lot of different types of data on it. I recall doing our best to sprinkle in helpful layers of Russian doll caching to speed things along. I distinctly remember battling over properly caching things like the avatar of the most recent user to comment on each status on the dashboard.

Daniel: Site performance has been really good, and it has only gotten better with newer versions of Ruby and Rails. We are on Rails 4.2 and Ruby 2.2 now, and the site is nice and snappy. The caching for the dashboard has definitely been a pain point for me. In the dashboard activity feed, there are “report as abusive” links shown to all users and “promote/demote” links shown only to admins. I moved both sets of links into a single drop-down menu for each activity feed item, and it was a headache when it came to caching and making sure the “promote/demote” links weren’t accidentally shown to non-admin users. Cache invalidation is definitely one of the two hardest things in programming.
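
One common way to keep role-dependent links out of shared fragments is to fold the viewer’s role into the cache key, so admins and non-admins read and write different fragments. A sketch, not the exact Glu markup:

```erb
<%# Including current_user.admin? in the key means an admin's %>
<%# fragment, with promote/demote links, can never be served %>
<%# to a non-admin. Partial names here are illustrative. %>
<% cache [activity, current_user.admin?] do %>
  <%= render activity %>
  <%= render "activities/report_abuse_link", activity: activity %>
  <% if current_user.admin? %>
    <%= render "activities/promote_demote_links", activity: activity %>
  <% end %>
<% end %>
```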

On Adding New Features

Derek: Have you found the system easy to understand as a whole? How about the individual parts? As features have been introduced and changed, have you found the system amenable to change? Was the test coverage sufficient to be able to confidently make necessary changes?

Daniel: As a whole, the system has been fairly easy to understand. Within the app directory there are builders, delegates, facades, presenters, and processes. All of the classes in those directories are just POROs, but I’m typically used to seeing them in a services or lib directory. That took some getting used to, and it seems that some classes are miscategorized.

Adding and extending features has been pretty simple. The small classes and abstractions make everything easy to understand. Making changes and additions is fairly painless. I don’t think I have had any instances where adding something required a huge refactoring to allow it. The test coverage is great, 87% according to Code Climate, and I always feel pretty confident making changes knowing there is good coverage.

I wish there were more tests for the admin features. I had an instance where just a few high level tests would have prevented a significant bug in the admin area.

On What Could Have Gone Smoother

Derek: What do you wish we had done differently – system wide, on an individual feature level or both? What has caused you the most pain or annoyance?

Daniel: I wish the five-line rule had not been followed in the specs. I would rather have more verbose feature specs with multiple expectations to help keep the test suite fast.

My biggest annoyance is image uploads, but that’s really more of a Heroku issue. We were having timeout issues with users uploading images from their phones over slow connections. Images are now uploaded directly to Amazon S3 from the client using JavaScript. A callback from the direct upload triggers the app to download the image from S3 for post-processing with Paperclip. The processed images are then uploaded back to S3, and the original upload is deleted. That’s a pretty convoluted process for uploading a user’s newest cat photo, if you ask me.
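
For the curious, the server side of that flow might look roughly like this. The controller, model, and S3_BUCKET helper are hypothetical; the shape (callback endpoint, download from S3, Paperclip post-processing, delete the original) follows Daniel’s description:

```ruby
require "open-uri"

# Hypothetical callback endpoint hit by the JavaScript direct uploader
# once the raw image is sitting in S3.
class DirectUploadsController < ApplicationController
  def create
    photo = current_user.photos.new

    # Paperclip's URI adapter downloads the raw upload, runs the
    # configured processors, and pushes the processed styles to S3.
    photo.image = URI.parse(params.fetch(:s3_url))
    photo.save!

    # Remove the unprocessed original. S3_BUCKET is assumed to be an
    # Aws::S3::Bucket configured in an initializer.
    S3_BUCKET.object(params.fetch(:s3_key)).delete

    head :created
  end
end
```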

Wrapping Up

Daniel: Overall, I have to say the code has been a real joy to work on and I have learned so much from it.

Derek: Thanks for taking the time to catch up with us. I’m glad to hear the project is progressing well. All of us who worked on it here really enjoyed it, and we reminisce about it often.

About T1D Exchange

If you or someone you love lives with T1D, check out T1D Exchange and the Glu community. If you’d like to work with Daniel on the future of Glu and C.A.R.E., check out the Junior Software Developer position they have open.