Behind the scenes of product and engineering at Quri.

Introducing Titanium

We wrote Titanium in order to respond to the following user story:

As an EasyShift user, I want all images inside of a job / survey to be tappable, zoomable, etc., so that I can get a better look at them.

Strictly speaking, we could have dealt with this problem by using a good old UIScrollView and a standard modal transition. However, we felt like this deserved a little bit more love.

Preview

Custom transition

Apple provides an easy way to customize transitions between view controllers: the UIViewControllerAnimatedTransitioning protocol. It lets you manually specify all the animations for presenting and dismissing view controllers.

Scale and translate

In this case, I wanted to recreate an animation like the one in the Photos app, where a thumbnail preview enlarges to fill the entire screen. In order to achieve this, the logic was to instantiate the full-screen image view, apply a scale + translation transform to make it the size and position of the thumbnail, then revert the transform to CGAffineTransformIdentity while animating. Easy enough.
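The core of that math is computing the scale and translation between the two frames. A minimal sketch (assuming `thumbnailFrame` and `finalFrame` are the thumbnail's and full-screen image view's frames in the container's coordinate space; the names are illustrative, not from the actual Titanium source):

```objc
// Scale the full-screen view down to the thumbnail's size and move its
// center onto the thumbnail's center, then animate back to identity.
CGFloat scaleX = CGRectGetWidth(thumbnailFrame) / CGRectGetWidth(finalFrame);
CGFloat scaleY = CGRectGetHeight(thumbnailFrame) / CGRectGetHeight(finalFrame);
CGFloat dx = CGRectGetMidX(thumbnailFrame) - CGRectGetMidX(finalFrame);
CGFloat dy = CGRectGetMidY(thumbnailFrame) - CGRectGetMidY(finalFrame);

CGAffineTransform scale = CGAffineTransformMakeScale(scaleX, scaleY);
CGAffineTransform translate = CGAffineTransformMakeTranslation(dx, dy);
imageView.transform = CGAffineTransformConcat(scale, translate);

[UIView animateWithDuration:0.3 animations:^{
    imageView.transform = CGAffineTransformIdentity;
}];
```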

Mask

But we’re not done yet. Just like in the Photos app, the aspect ratio of the thumbnail is independent of that of the image itself. What’s more, in the Photos app the thumbnails are always square, whereas our thumbnails can have any arbitrary aspect ratio. In a plain UIImageView (or any other UIView subclass, for that matter), this is easily achieved like so:

[imageView setContentMode:UIViewContentModeScaleAspectFit];

However, we need an animated transition between the cropped image of the thumbnail and the full view. The solution here is to use the mask property of CALayer like so:

CALayer *mask = [CALayer layer];
// Set up the mask's bounds to correspond with the visible part of the thumbnail.
[imageView.layer setMask:mask];
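The bounds computation elided in the comment above could go something like this (a sketch under the assumption that the thumbnail renders the image aspect-fill, so the visible region is a centered rect with the thumbnail's aspect ratio; `thumbnailFrame` is an illustrative name):

```objc
CALayer *mask = [CALayer layer];
// A mask reveals wherever it is opaque, so give it solid contents.
mask.backgroundColor = [UIColor blackColor].CGColor;

CGFloat thumbAspect = thumbnailFrame.size.width / thumbnailFrame.size.height;
CGRect visible = imageView.bounds;
if (visible.size.width / visible.size.height > thumbAspect) {
    visible.size.width = visible.size.height * thumbAspect;
} else {
    visible.size.height = visible.size.width / thumbAspect;
}

mask.bounds = visible;
mask.position = CGPointMake(CGRectGetMidX(imageView.bounds),
                            CGRectGetMidY(imageView.bounds));
[imageView.layer setMask:mask];
```

During the transition, the mask's bounds can then be animated out to the image view's full bounds.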

Corner radius

We also wanted to enable users of Titanium to use thumbnails with rounded corners. That meant we had to use the cornerRadius property on our full screen view. However, we couldn’t just read the value from the thumbnail view, apply it to our full screen view and call it a day. Because our view was going to get scaled, we had to multiply the value by the inverse of the scale factor before applying it.
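In code, that adjustment is a one-liner (`scaleFactor` here is the thumbnail-to-full-screen scale computed for the transform; the names are illustrative):

```objc
// The full-screen view starts out scaled down by scaleFactor, so its
// corner radius must be multiplied by the inverse of that factor to
// appear the same size as the thumbnail's corners on screen.
fullScreenImageView.layer.cornerRadius =
    thumbnailView.layer.cornerRadius / scaleFactor;
```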

Animating

The major drawback of using CALayer properties is that, unlike CGAffineTransforms, they cannot be animated using UIView animation blocks. Instead, I had to create a CABasicAnimation for each property I wanted to animate (mask and cornerRadius). Here’s a quick example:

CABasicAnimation *animation = [CABasicAnimation animationWithKeyPath:@"bounds.size.width"];
animation.duration = 1.0;
animation.fromValue = @(200.0);
animation.toValue = @(320.0);
[mask addAnimation:animation forKey:@"bounds"];

Full screen image view

The easiest, most straightforward way of providing scrolling and zooming capabilities to a view with arbitrarily-sized content is to use a UIScrollView. In practice however, it proved too challenging to integrate into the custom animations, and an alternative solution was found.

Gesture recognizers

The full screen view is composed of a black background view and an image view. Interaction is achieved using UIGestureRecognizers that detect four different types of gestures:

  • pan
  • pinch
  • tap
  • double tap

The general structure of the code was derived from Apple’s Touches GestureRecognizers sample project.

Pan & pinch

These two gesture recognizers work in tandem to provide direct manipulation of the image view. The UIPanGestureRecognizer affects the center property of the image view, while the UIPinchGestureRecognizer applies a CGAffineTransformScale to it.
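Sketched out, the two handlers might look like this (illustrative, not the exact Titanium source; `imageView` is assumed to be a property on the full-screen controller):

```objc
- (void)handlePan:(UIPanGestureRecognizer *)pan {
    CGPoint translation = [pan translationInView:self.view];
    CGPoint center = self.imageView.center;
    self.imageView.center = CGPointMake(center.x + translation.x,
                                        center.y + translation.y);
    // Reset so the next callback reports an incremental translation.
    [pan setTranslation:CGPointZero inView:self.view];
}

- (void)handlePinch:(UIPinchGestureRecognizer *)pinch {
    self.imageView.transform =
        CGAffineTransformScale(self.imageView.transform, pinch.scale, pinch.scale);
    // Likewise, reset so the reported scale stays incremental.
    pinch.scale = 1.0;
}
```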

Tap & double tap

In addition to the pinching and panning, two instances of UITapGestureRecognizer handle single- and double-taps. A single tap will revert to the original zoom level and dismiss the view, and a double tap will zoom to the maximum zoom level.

In order for these to work alongside each other, the gestureRecognizer:shouldRequireFailureOfGestureRecognizer: delegate method is implemented to return YES if the two gesture recognizers in question are the single-tap and the double-tap, respectively. One drawback of this solution is that it introduces a slight delay in the detection of a single-tap, while the system gives the user a chance to perform a double-tap.
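That delegate method reduces to a couple of identity checks (a sketch; the recognizer property names are illustrative):

```objc
// Make the single-tap wait for the double-tap recognizer to fail
// before it fires.
- (BOOL)gestureRecognizer:(UIGestureRecognizer *)gestureRecognizer
shouldRequireFailureOfGestureRecognizer:(UIGestureRecognizer *)otherGestureRecognizer {
    return gestureRecognizer == self.singleTapRecognizer &&
           otherGestureRecognizer == self.doubleTapRecognizer;
}
```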

The Value of Speed in Market Research

At a recent SF American Marketing Association meeting I participated in a panel on market research trends and was asked which recent trend had changed research the most. While I think the questioner was looking for a technology answer, my answer was the trend toward speed: where a fast answer is better than a precise answer or an answer derived from perfect methodology. I see this both in our client-facing product and in our development environment.

At Quri we collect high quality data for customers on the performance of their products at retail. We also collect data to improve our product development activities. In this blog post, I will look at speed through these two lenses.

Speed in research has its yin and yang. It’s one thing to collect data quickly, but it’s useless unless you can do something with it. Technology enablement allows for the collection of data at lower cost and with faster turns. It also allows for faster use of the data. In the case of retail execution, for example, we can instantly notify field forces of problem conditions so they can take action. In the case of product development, a modern software development organization is iterating and deploying constantly and can absorb and act on new data within hours.

Enablement of Speed

For Quri, the creation of a crowdsourced labor force that can perform measurements over vast geographies is enabled by the ability to utilize the camera, computer, and transmission device otherwise known as a smartphone. Recruiting a workforce is also made easier through app store recruitment and mobile and online acquisition methods. We can recruit users to do thousands of shifts in seconds, and get answers back in hours. Companies like Quri, uSamp, and Shopkick are all benefiting from the ability to produce data faster and at a lower cost. In an industry where surveys would take weeks to launch and 30 days or more to field, this is truly revolutionary.

In product development we also see a trend toward lower-cost and faster research methods. Whether these methods are pop-up quizzes from ForeSee and SurveyMonkey, measurement tools like Google Analytics and Mixpanel, or in-experience surveys like those Qualaroo (formerly KISSinsights) provides, the intention is to gather data quickly. In addition to those tools we use “anywhere” shifts, which a shifter can do anywhere (I know, the naming is genius). These surveys allow us to ask our shifters questions about how they like to work, what new features they want, or who they are. We get answers from thousands of shifters within a day and often within hours. We debate in the morning and we have answers in the afternoon. Our community forums also provide instant feedback on issues we ask about or developing trends our shifters see.

So what is the benefit of all this speed?

The initial benefits are:

  • Shorten debate: We see this in both our client conversations and in product development. A brand’s sales force will have one opinion of execution and a broker another. This debate can rage for months, and data collected infrequently over long periods of time confuses things even more. But data collected in the days after a new product or promotion launch, or the day before a sales meeting, provides immediate, unequivocal evidence as to whether something is on or off track. In product development, knowing which direction to head isn’t always obvious, but getting immediate user feedback and shortening debate reduces time to market and doesn’t waste engineering resources building the wrong thing.

  • Correct: Fast feedback allows for immediate correction. At retail that means the difference between selling lots of ice cream on July 4th or missing that key promotion day. In product development, that means catching issues that impact users quickly, reducing churn and customer support costs.

Longer term benefits:

  • More iterations: Speed and low cost allow for more iterations. A simple analogy is how a tank battle is fought. It’s not “ready, aim, fire.” Tanks fight in a fire, measure, correct, fire, measure, correct sequence. These small, rapid corrections keep things proceeding in the right direction. In the case of retail, this allows execution managers to try different methods for improving execution and to improve from promotion to promotion. In the case of product development, the same is true. The risk of being wrong goes down, as fast feedback allows you to learn quickly and build on your learnings. To leverage this you have to have the development process wired correctly, but most modern SaaS firms are moving in that direction.

  • Exposure of systemic issues: More iterations also mean more frequent measurement and more data. Having a richer data set across long time periods, composed of a number of short measurements, allows analysis to surface systemic insights. In retail, for example, we see certain stores, distribution channels, methods, and organizations that have recurring issues. These systemic issues, once addressed, can bring much bigger gains than short-term corrections. In the case of product development, systemic issues ranging from cohort behavior to system improvement or degradation, scale issues, and seasonality are all more clearly seen with more frequent data.

  • Prediction: Faster and lower-cost data collection increases predictive capabilities. In retail that means execution plans can be developed that more reliably predict future performance. A chain that has historically not demonstrated the ability to execute well may not be the best candidate for a future promotion. For product development, the same holds: richer historical data makes it easier to predict how users will respond to changes.

The Need for Speed∗

Once your organization gets used to getting research results back in days instead of months, it will never be the same. Speed requires a mindset change: initially people have a hard time grasping what they would do with data that comes in quickly, but that quickly turns into an addiction. We see the migration in clients, who become quickly dissatisfied with slower methods and vendors. When you can have data in days, months are a lifetime. When the answer can come tomorrow, why would you wait?

∗ You can’t do a post on speed and not use that quote

Better iOS/Mac Models Part 1

One thing that nearly every app that communicates with external services does is parse JSON and convert it into usable model objects. I’ve seen this handled numerous ways, the most common being some sort of for loop on an NSDictionary’s keys, looking for specific strings.

for (NSString *key in dictionary) {
    if ([key isEqualToString:kSomeJsonKey]) {
        self.someProperty = dictionary[key];
    } else if ([key isEqualToString:kSomeOtherJsonKey]) {
        // ... and so on
    }
}

Not only is this a major pain to read, but it makes updating models tedious and error-prone. After spending years populating my models this way, I set out to find something better. In part one of this three-post series I’ll introduce the mechanisms we use to automatically marshal JSON API responses into models. In part two I’ll introduce automatic NSCoding and NSCopying support and additional utility methods. Part three will be the open sourcing of our base model class.

Core Functionality / Requirements

Before jumping into the code I will explain some of the requirements we have for our model objects. First, assume that all of our models will inherit from a single base class that handles most of the functionality.

1) Automatic marshaling of model(s) from an NSDictionary or NSArray representation

Passing an NSDictionary or NSArray to a single method will automatically generate a complete object graph, complete with nested models and collections of models.

2) Automatic NSCoding & NSCopying support

All models, by default, support NSCoding & NSCopying so the app can easily persist them to disk and make deep copies. Very little, if any, scaffolding code should be required to support this.

3) Automatic conversion of NSStrings to types like NSDate, NSURL, etc.

Models oftentimes contain more complex types, like NSDates, NSURLs, and even things like (UI|NS)Images. Conversion to these types needs to be easy to set up and seamless.

ESModel

Now we’ll jump right into the base model class (pared down for part one) with the majority of our functionality, along with an example.

#import <Foundation/Foundation.h>

typedef id (^ValueConverter)(id obj);

@interface ESModel : NSObject <NSCopying, NSCoding>

+ (NSDictionary *)valueMap;

+ (instancetype)fromDictionary:(NSDictionary *)d;

+ (NSMutableArray *)fromArray:(NSArray *)a;

+ (NSDictionary *)valueConverters;

+ (ValueConverter)modelValueConverter;

+ (ValueConverter)arrayOfModelsValueConverter;

@end

@implementation ESModel

+ (NSDictionary *)valueMap {
    @throw [NSException exceptionWithName:@"Abstract Method Access" reason:@"This method needs to be overridden by any inheriting class." userInfo:nil];
}

+ (instancetype)fromDictionary:(NSDictionary *)d {
    id model = [[[self class] alloc] init];

    NSDictionary *valueMap = [self valueMap];
    NSDictionary *valueConverters = [self valueConverters];

    for (NSString *key in valueMap) {
        id value = d[key];
        if (value) {

            // skip NSNulls
            if (value == [NSNull null]) continue;

            // convert if there is a converter
            if (valueConverters[key]) {
                ValueConverter valueConverter = valueConverters[key];
                value = valueConverter(value);
            }

            // finally set the value
            [model setValue:value forKey:valueMap[key]];
        }
    }

    return model;
}

+ (NSMutableArray *)fromArray:(NSArray *)a {
    NSMutableArray *models = [NSMutableArray arrayWithCapacity:[a count]];
    for (NSDictionary *d in a) {
        id obj = [[self class] fromDictionary:d];
        [models addObject:obj];
    }

    return models;
}

+ (NSDictionary *)valueConverters {
    return nil;
}

+ (ValueConverter)modelValueConverter {
    return [^id (id obj) {
        NSDictionary *d = (NSDictionary *)obj;
        return [[self class] fromDictionary:d];
    } copy];
}

+ (ValueConverter)arrayOfModelsValueConverter {
    return [^id (id obj) {
        NSArray *a = (NSArray *)obj;
        NSMutableArray *newArray = [NSMutableArray arrayWithCapacity:[a count]];

        for (NSDictionary *d in a) {
            [newArray addObject:[[self class] fromDictionary:d]];
        }

        return newArray;
    } copy];
}


/**

Ex json:

{
  "key_1":"foo",
  "key_2":"bar",
  "model_2": { "key_3":"bat" },
  "array_of_model_2s": [ { "key_3":"something" }, { "key_3":"else" } ]
}

*/

@class ModelTwo; // another ESModel subclass (with a key_3 property)

@interface MyModel : ESModel
  
@property (nonatomic, strong) NSString *modelProperty1;
@property (nonatomic, strong) NSString *modelProperty2;
@property (nonatomic, strong) ModelTwo *model2; 
@property (nonatomic, strong) NSArray *model2s;

@end

@implementation MyModel
  
+ (NSDictionary *)valueMap {
  return @{    
      @"key_1":@"_modelProperty1",
      @"key_2":@"_modelProperty2",
      @"model_2":@"_model2",
      @"array_of_model_2s":@"_model2s"
  };
}
  
+ (NSDictionary *)valueConverters {
  return @{
      @"model_2":[ModelTwo modelValueConverter],
      @"array_of_model_2s":[ModelTwo arrayOfModelsValueConverter]
  };
}
  
@end   

Whoa, that’s a lot to digest. Let’s go through it and use MyModel to illustrate how ESModel works.

First up is + (NSDictionary *)valueMap;. This method must be implemented by any sub-classes and is the basis for the conversion of NSDictionary key-value pairs to model properties. If you look at MyModel you can see it simply returns a dictionary with the keys corresponding to what would typically be JSON keys and the values corresponding to the model’s ivars.

The next method is a biggie and provides the main functionality we desire. As its name implies, fromDictionary handles converting a dictionary into a model class. Let’s go through it in chunks so it’s easier to digest.

+ (instancetype)fromDictionary:(NSDictionary *)d {
    id model = [[[self class] alloc] init];

    NSDictionary *valueMap = [self valueMap];
    NSDictionary *valueConverters = [self valueConverters];

These lines are rather self-explanatory. Since they’re executed in the context of a subclass of ESModel, they’re initializing an instance of MyModel and pulling out the valueMap and valueConverters. The valueConverters method provides functionality to marshal any Obj-C class from some NSDictionary representation (and by extension JSON representation). More on this later.

for (NSString *key in valueMap) {
        id value = d[key];
        if (value) {

            // skip NSNulls
            if (value == [NSNull null]) continue;

Next we start looping through the dictionary representation and attempt to pull out values based on the keys of our valueMap. So in the case of our JSON we’ll get two NSStrings, one NSDictionary (model_2) and one NSArray (array_of_model_2s). I’ve never found [NSNull null] particularly useful and prefer the behavior of nil, so we skip over those values. That said, your mileage may vary, and adding a flag to conditionally skip nulls is easy enough.

        // convert if there is a converter
        if (valueConverters[key]) {
            ValueConverter valueConverter = valueConverters[key];
            value = valueConverter(value);
        }

Next we check for a ValueConverter for this key. A ValueConverter is a very simple block that converts one object to another. The block’s signature is:

typedef id (^ValueConverter)(id obj);

In ESModel you’ll notice there is a method named valueConverters. This optional, overridable method returns an NSDictionary with all of the keys that should be converted from one representation to another. Looking at MyModel you can see it returns two keys: one for a single model and one for an array of models. These converters are conveniently implemented for any model that inherits from ESModel. More on those in a second.
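Converters aren’t limited to models; this is also where requirement 3 (NSString-to-NSDate conversion and the like) slots in. As a hypothetical illustration (the key name and date format are our own, not from the listing above), a subclass could return a date converter:

```objc
+ (NSDictionary *)valueConverters {
    // Formatter for ISO-8601-style timestamps; created once per call here
    // for simplicity, though caching it would be more efficient.
    NSDateFormatter *formatter = [[NSDateFormatter alloc] init];
    formatter.dateFormat = @"yyyy-MM-dd'T'HH:mm:ss'Z'";
    formatter.timeZone = [NSTimeZone timeZoneWithName:@"UTC"];

    return @{
        @"created_at": [^id (id obj) {
            return [formatter dateFromString:(NSString *)obj];
        } copy]
    };
}
```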

Finally, wrapping up fromDictionary we use KVC to set the property on our model based on the value from valueMap:

            // finally set the value
            [model setValue:value forKey:valueMap[key]];
        }
    }

    return model;
}

Next, there is a very simple method named fromArray that converts an array of dictionaries into models… as you’d expect, it simply loops over the array and calls fromDictionary on itself.

The more interesting bits come next: the ValueConverter methods we touched on earlier. Let’s look at them.

+ (ValueConverter)modelValueConverter {
    return [^id (id obj) {
        NSDictionary *d = (NSDictionary *)obj;
        return [[self class] fromDictionary:d];
    } copy];
}

+ (ValueConverter)arrayOfModelsValueConverter {
    return [^id (id obj) {
        NSArray *a = (NSArray *)obj;
        NSMutableArray *newArray = [NSMutableArray arrayWithCapacity:[a count]];

        for (NSDictionary *d in a) {
            [newArray addObject:[[self class] fromDictionary:d]];
        }

        return newArray;
    } copy];
}

The first ValueConverter method is a straightforward utility that builds on fromDictionary. It lets us compose complex model graphs in our JSON responses and have everything convert to the proper models. MyModel uses this to convert an embedded model in its JSON to the correct class. The second method is similar to the first, converting an array of embedded models into the correct representation.
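Putting it together, marshaling the example JSON from earlier is a single call (assuming the response has already been deserialized into an NSDictionary, e.g. with NSJSONSerialization):

```objc
NSDictionary *json = @{
    @"key_1": @"foo",
    @"key_2": @"bar",
    @"model_2": @{ @"key_3": @"bat" },
    @"array_of_model_2s": @[ @{ @"key_3": @"something" },
                             @{ @"key_3": @"else" } ]
};

MyModel *model = [MyModel fromDictionary:json];
// model.modelProperty1 should now be @"foo", model.model2 a ModelTwo
// instance, and model.model2s an array of two ModelTwo instances.
```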

What’s Next

In part two we’ll cover the following:

  • Additional ValueConverter topics
  • Refactoring ESModel to automatically handle NSCoding and NSCopying

Stay tuned and feel free to send any feedback or questions to brian@quri.com. Thanks!

Understanding the User

Great products address a clear need, where the audience and their goals are easy to understand and articulate. If you can’t identify your user’s goals and motivations, you can’t design an effective product.

Quri offers a retail intelligence platform that audits retail execution using crowdsourced consumer data from our mobile app EasyShift. There are two audiences served: the task force collecting the data, and the customers who use it. The customer’s goal is easy to define: to gain insight into their retail execution. The task force’s motivation is equally clear: to get paid for completing simple tasks.

Because our customers lack the manpower to audit their product execution themselves, there is clear value in a service which can quickly aggregate this information. The dashboard started with basic job-based data (what was collected for a specific set of stores for a specific date range). As we collect more data the possibilities for new features which make use of that data increase - like the ability to see trends in data over time or to gauge the efficiency of corrective action. More features require a better understanding about how those features are used, how they work in relation to each other, and how the UI should be optimized to support them.

The problem is that the consumer packaged goods industry has never seen an offering like this, so until we acquire a larger customer base, finding users to give feedback on the dashboard design can be a challenge. Without direct access to users we have to find alternative ways to evaluate the designs.

We have analytics on the dashboard to identify patterns of behavior and preferred features of heavy users, but this only provides insight into the usage of our limited customer base. We always have access to proxy users - either in the form of internal users or third party recruits. But one of the best vehicles for gathering customer feedback is our sales channel. We include proposed feature designs in sales presentations and gauge customer interest. Some sales presentations include rich interactive demos of potential features, and this helps us gather feedback on how these designs meet or fail to support customer needs.

Fortunately the audience for the EasyShift consumer app is more accessible. We have analytics to monitor app usage, direct access to users for exploratory conversations, and a community forum in which users regularly volunteer information about their behavior, motivations, and problems. We have also used testing service usertesting.com for evaluating new features and better understanding user behaviors.

For example, heavy users tend to reserve several shifts at once, and have a tendency to travel from location to location completing shifts in a single trip. The interaction therefore needs to be optimized to support this behavior. Users have expressed secondary motivations for using the app beyond just the payment incentive, like the feeling that they have contributed to correcting systemic problems in retail execution. This kind of sentiment helps inform the way we communicate the messaging in the app.

Having a clear understanding of the audience of both sides of our product helps us to iterate our product designs to make the experience for both audiences even better.

Mobile Crowdsource User Community

Earlier this year we started exploring the idea of launching a community for our EasyShift users. The drive behind this was to allow users of EasyShift (“shifters”) to feel as if they were part of something larger and to develop a responsibility not just to us and our company but to each other.

We had some tactical benefits in mind as well. We thought “we can get users helping users,” “we can learn directly from users what their issues are,” and “we can have users share tips.” We also thought that some things might happen that we couldn’t anticipate, but having people talk to each other would be a good thing.

The idea of community is in no way new, but in the crowdsourcing space it hadn’t been done within the app itself and sponsored by the company. Of our direct competitors, Gigwalk had a community and then pulled it down, and Field Agent doesn’t have one. Mechanical Turk had a number of third-party forums but nothing it directly sponsors, and TaskRabbit doesn’t have one either. We didn’t really understand this, as we were a crowdsourcing company; we believed in the idea that large populations could do lots of useful work. These other companies do as well, and yet no one was facilitating any user-to-user communication. Each “worker” was in a silo.

Why not?

Well, we had our own fears as well. What would people talk about? What if they said bad things about us? What if our competitors viewed it? What if only a few people participate and then it dies? Using community within an app is hard. What if they unionize?

Prior to working on the community we launched some surveys in the app asking whether people would participate and what they would talk about. Over 60% said they would, and this held across light and heavy users. They also suggested that they wanted to help each other, find out who else was “out there,” and just socialize.

Well, none of our fears turned out to be true. We launched our community along with our ES 4.0 release. It has been a hands-down, all-out success. Our community members were the first to tell us about issues with our release. They teach each other how to do shifts, lecture each other on good and bad behavior with regard to EasyShift, discuss what they like and don’t like about our competitors’ apps, tell us they love us, tell us they hate us, and convince each other to love us or hate us (OK, they mostly love us). We have hundreds of posts per day. And most importantly for our bigger goal, there is now an “us”: an EasyShift community where shifters feel they are part of something bigger than themselves.

The benefits for our team have been far greater than anticipated. Unlike with a support desk, people seem freer with their observations, and the team feels freer in exploring the discussions and interacting with users. For our team, the discussions have been the go-to place for learning what is happening with the user base, seeing if the app is working, and identifying what is going on with competitors. The feedback is instant; we can ask questions and get answers. Are all the answers statistically significant? No, but it’s fast, and fast is good.

The lesson: if you have gathered a crowd, let them talk to each other!

How we did it

We developed the discussion groups internally and provided a minimum of tools. We weren’t really sure if anyone would use it, so we wanted to limit investment. The basic tools were: create a topic, post a comment. On the admin side we could view the discussions, hide a comment, or block a user. We tried it internally for a few weeks and let it fly.

As discussions popped up (we had over 50 topics in the first day), we altered the tools a bit to allow for better administration, faster reading of topics, and the ability to reply from our admin app. That caused us to start thinking about outsourcing the community function, but a pretty thorough vetting of third-party services revealed there isn’t a good tool for mobile-app-based communities.

We did develop some policies and rules for the shifters themselves. Those have helped keep things collegial. While we have identified who we are in the discussions, we haven’t limited who can post. Any team member can post a question, a word of advice, or a reply.

Backbone.js Memory Management

Backbone.js is a lightweight JavaScript library that helps give structure to your frontend JavaScript code. In particular, your data is represented by Models and those models can be displayed with Views. While being able to decouple your code this way is great (as opposed to the jQuery selector/callback spaghetti code of olden days), it is easy to forget about memory considerations and properly clean up your views. In this post we’ll run through some examples of Backbone.js memory leaks, use Chrome profiler to help identify these leaks, and finally discuss ways to manage and clean up your views to prevent these leaks in the first place.

Examples of Backbone.js Memory Leaks

Let’s say we have an index view and a “post” view for displaying a series of posts:

class BackboneMemoryExample.Views.Posts.IndexView extends Backbone.View
  template: JST["backbone/templates/posts/index"]

  initialize: () ->
    @options.posts.on('reset', @renderPosts)

  renderPosts: () =>
    @$('#posts').empty()
    @options.posts.each(@renderPost)

  renderPost: (post) =>
    view = new BackboneMemoryExample.Views.Posts.PostView({model : post})
    view.on('postWasClicked', @postWasClicked)
    @$('#posts').append(view.render().el)

  render: =>
    $(@el).html(@template(posts: @options.posts.toJSON() ))
    @renderPosts()
    this

  postWasClicked: (post) =>
    post.set('title', "Post Title #{Math.random()}")
    @renderPosts()

class BackboneMemoryExample.Views.Posts.PostView extends Backbone.View
  template: JST["backbone/templates/posts/post"]

  events:
    "click" : "postClicked"

  initialize: ->
    @model.on('change', @attributesChanged)

  postClicked: ->
    @trigger 'postWasClicked', @model

  attributesChanged: =>
    @render()

  render: ->
    $(@el).html(@template(@model.toJSON()))
    console.log "post rendered"
    return this

In this contrived example, when you click on a post it updates its title with a random number, and all of the posts are redrawn. There are a few different types of events going on here. We have Backbone models and collections being bound to when updates are made (when a collection is ‘reset’, and when a model ‘changes’). We are binding to a Backbone view in the index page, and that ‘post’ view fires an event whenever it is clicked. Also, the Post view itself is binding to its own click event.

Here is what happens when we click on the Post view 10 times:

The expected outcome is only 11 log statements from the post render method (including the initial page load render). Yet because of a leak, it is being called more and more times with each click.

Normally you would not have a log statement like this in your render method, so this would have gone unnoticed. Using the Chrome profiler we can detect memory leaks and check the health of our application.

Using Chrome Profiler

Using the same example, lets look at ways to use Chrome Profiler to help us detect memory leaks.

In the Profiles tab in devtools, use the Take Heap Snapshot option to show how memory is being consumed by your application.

The initial memory consumption of the “posts” app is 2.4MB. I’ve loaded each post with a bunch of lorem ipsum text to increase memory usage and make leaks easier to see.

After clicking the post multiple times between each snapshot, we can see memory growth:


When taking heap snapshots it is important to return to an initial state where you believe the memory should return to the base level (in this case 2.4MB).

Using the comparison option (located on the bottom bar), we can compare the last heap snapshot with an earlier one to help see what has changed. Digging in, we can see that the problem is in PostView.

Clean Up The Leaks

With Backbone.js (and any other JavaScript library), it is important to clean up after yourself. As the previous example shows, memory can grow completely unnoticed. Not only that, but CPU cycles are wasted on bound function calls that were never cleaned up: the same render ends up running over and over, drawing over itself unnoticed (except for the eventual slowdown as the app is used or left open).

Let’s look at where we can clean up some events in the Post view.

Generally there are 3 types of events you want to clean up:

  • DOM Events (binding to onClick)
  • Binding to other Backbone.js Models and Collections
  • Binding to other Backbone.js Views

DOM events are easy and generally clean themselves up as long as you remove the view (calling view.remove() delegates to jQuery’s .remove(), which handles the cleanup for you).

For Backbone models, collections, and other views, you’ll need to keep track of everything you call .on() on, and remember to call .off() when cleaning up.

Let’s write a cleanup method in our Post view to unbind these events:

class BackboneMemoryExample.Views.Posts.PostView extends Backbone.View
  template: JST["backbone/templates/posts/post"]

  events:
    "click" : "postClicked"

  initialize: ->
    @model.on('change', @attributesChanged)

  postClicked: ->
    @trigger 'postWasClicked', @model

  attributesChanged: =>
    @render()

  render: ->
    $(@el).html(@template(@model.toJSON()))
    console.log "post rendered"
    return this

  leave: ->
    @model.off('change', @attributesChanged)
    @off()
    @remove()

We’ll also add a little view management in our index view to keep track of the subviews (named postViews in the code below) and call leave() on them when it’s time to clean up:

class BackboneMemoryExample.Views.Posts.IndexView extends Backbone.View
  template: JST["backbone/templates/posts/index"]

  initialize: () ->
    @options.posts.on('reset', @renderPosts)
    @postViews = []

  renderPosts: () =>
    _.each @postViews, (postView) ->
      postView.leave()
    @postViews = []
    @$('#posts').empty()
    @options.posts.each(@renderPost)

  renderPost: (post) =>
    view = new BackboneMemoryExample.Views.Posts.PostView({model : post})
    view.on('postWasClicked', @postWasClicked)
    @$('#posts').append(view.render().el)
    @postViews.push(view)

  render: =>
    $(@el).html(@template(posts: @options.posts.toJSON() ))
    @renderPosts()
    this

  postWasClicked: (post) =>
    post.set('title', "Post Title #{Math.random()}")
    @renderPosts()

Now, clicking on the post multiple times, we get the expected outcome and the app runs much faster:

Backbone has also recently introduced two methods to help keep track of and clean up events: listenTo and stopListening.

Using listenTo, you can have the view itself keep track of all of these events, and then clean them all up with a simple call to stopListening:

class BackboneMemoryExample.Views.Posts.PostView extends Backbone.View
  template: JST["backbone/templates/posts/post"]

  events:
    "click" : "postClicked"

  initialize: ->
    @listenTo(@model, 'change', @attributesChanged)

  postClicked: ->
    @trigger 'postWasClicked', @model

  attributesChanged: =>
    @render()

  render: ->
    $(@el).html(@template(@model.toJSON()))
    console.log "post rendered"
    return this

  leave: ->
    @stopListening()
    @off()
    @remove()

These are just some basic things to keep an eye out for when using Backbone.js. Remember to profile your JavaScript regularly to help track down these types of memory leaks and keep your app running smoothly.