
Machine Learning: Make Your Ruby Code Smarter.

I gave this presentation at the RailsIsrael 2016 conference. I covered the basics of all the major algorithms for supervised and unsupervised learning, without a lot of math, just to give an idea of what's possible with them.

There is also a demo with Ruby code of a Waze/Uber-like suggested-destinations prediction built with fast neural networks in Ruby.

Slides are here.

Code is here.

Video and screencast should be published here shortly.

Migrating from Flux to Redux.

I talked about migrating from Flux to Redux last Wednesday at the ReactJS Israel meetup.

Video and screencast should be published at ReactJS-IL shortly.

TL;DR: When I started working with React back in April 2015, there were many libraries for managing application flow. I decided to start with the classical FB Flux implementation to understand what was missing there. Eventually redux and react-redux solved most of the issues I had with Flux. This talk is about the practical aspects of migrating from Flux to Redux.

Slides are here.

Code is here.

Start with the tag #flux, then #redux, then switch to the master branch.

Content Creation Flow (5 mins reading time).

Today we’re going to talk about an effective way of defining new product features.

A product feature can be defined in two ways – from either a marketing or engineering perspective. The marketing approach means explaining how the feature benefits the customer, and the engineering perspective means explaining how that feature works.

Some product managers may miss this distinction and explain a new feature to their engineering team from a marketing perspective. As a result, engineering may work really hard and possibly proceed in the wrong direction, losing time and money for the company.

For example, let’s take the well-known concept of an online marketplace that sells several products and describe an exciting new feature from a marketing perspective. Our marketplace will now display a list of 10 featured products on its home page.

Well, engineering will have lots of questions. Who will feature the products? How will they be featured? How long will they be featured? Who will un-feature the products? All of these issues have to be clarified.

At Astrails, we’ve come up with the concept of Content Creation Flow. It’s a way of first explaining how data is created in the system and then understanding how it’s consumed, instead of vice-versa. Also, at any given time we can only use terms that were defined beforehand (i.e. objects that have already been created). It makes the concept a little harder to define, but so much easier to understand.

For example:

Facts - application admin users moderate the products; a product back-office UI is used to edit/update the products.

Flow - application admins should be able to mark a product as “featured” through the back office UI. The ten most recently featured products should appear on the home page.

In this example, the creation process is described before the consumption process.

This makes it easier for engineering to understand and implement the feature. In our example, each product will have a featured timestamp. In order to feature a product, application admins will trigger the timestamp to be set to the current time. The 10 most recent products should appear on the home page in descending order of those timestamps. We’ll cache these 10 products and invalidate the cache when any product is changed or when a different product is featured by the application administrator. Crystal clear. For engineering.
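As a rough sketch of that spec in plain Ruby (no database; the names `Product`, `Catalog`, and `featured_at` are illustrative, not from the original post), notice how creation is defined before consumption:

```ruby
# Hypothetical sketch of the "featured products" spec above.
Product = Struct.new(:name, :featured_at) do
  # Creation: an application admin features a product by setting the timestamp.
  def feature!(now = Time.now)
    self.featured_at = now
  end
end

class Catalog
  def initialize(products)
    @products = products
  end

  # Consumption: the ten most recently featured products, newest first.
  def featured
    @products.reject { |p| p.featured_at.nil? }
             .sort_by(&:featured_at)
             .reverse
             .first(10)
  end
end
```

With ActiveRecord this would boil down to a single scope ordering by the timestamp and limiting to ten, plus the cache invalidation described above.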

This concept can and should be applied to wireframing and designing user interfaces as well.

Let’s assume again that we’re building a new online marketplace. The designer starts by designing a home page instead of starting with the inner pages, and presents a first draft showing 10 featured product boxes. Each box has a product image, product name and a short description.

Engineering then receives the homepage design and discovers that there is no short product description field at all. So, what now? Engineering can add this field, but they don’t have a UI for the seller to provide a short description. The web designer can remove the field and use the seller’s name instead. In either case, precious time has been wasted.

A better way to develop the product would be to design the seller's product-editing page first. Once that is ready, designers will know that the only fields supported are the product name and a long product description. Long descriptions obviously can't be used in a small product display box, so the designer will use the seller's name in the homepage design from the start.

Everything is clear from the engineering perspective - the data to be consumed has already been created and the engineers know how to query it.

So, we discovered that feature specs are described more clearly using Content Creation Flow. Designs and UI can also be produced more efficiently according to this flow.

It turns out that engineering can use the same flow as well and save lots of time.

How? Let’s imagine the following situation. Engineering got specs for an exciting new feature. The specs are 90% complete. The engineers are ready to start coding, but there’s an editing screen that hasn’t been designed yet. They decide not to wait. They’ll just seed some data, render the other pages that use the seed data, and get back to that editing screen later. After a while, it turns out that the editing screen has more fields than they expected, with complex relations between them, and the seeds are not good enough. Now engineering needs to refactor the code and throw away lots of their work before they can even start adding the new fields. Result: time wasted.

If they had started with the editing screens, all of the fields and relations that will be used later would have already been defined.

There’s nothing wrong with seeding some data for development, but it’s probably not a good idea to do so before it becomes clear what the data will look like. So, creation before consumption. Always.

If you have any questions about content creation flow, or any questions at all, feel free to email me at boris@astrails.com.

We usually use Dragonfly to handle user-generated assets in almost all of our projects. But sometimes Dragonfly with ImageMagick doesn’t play nicely in limited environments like Heroku.

We were getting tons of R14 (Memory quota exceeded) errors after analyzing even small images with ImageMagick’s identify command.

Here is how we solved it.

First of all, the context.

We use direct S3 upload on the client side to reduce the load on the Heroku servers. The client goes to the Rails server with a sign request and gets back a policy, a signature, and a key (aka the path) of the resource to be uploaded to S3. There are more details about the jQuery-File-Upload flow here.
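For reference, here is a minimal sketch of what such a sign endpoint might compute. The method and parameter names are my own, and this assumes the classic S3 v2 browser-based POST policy (a base64-encoded JSON policy signed with HMAC-SHA1), not the actual code from our app:

```ruby
require "base64"
require "json"
require "openssl"
require "securerandom"
require "time"

# Hypothetical sketch: build the policy, signature, and key the client
# needs for a direct S3 POST upload. Conditions here are illustrative.
def s3_upload_params(bucket:, secret_key:, prefix: "uploads")
  # ${filename} is substituted by S3 with the uploaded file's name.
  key = "#{prefix}/#{SecureRandom.uuid}/${filename}"

  # The policy restricts what the client is allowed to upload.
  policy = Base64.strict_encode64({
    expiration: (Time.now.utc + 600).iso8601,
    conditions: [
      { bucket: bucket },
      ["starts-with", "$key", "#{prefix}/"],
      ["content-length-range", 0, 10 * 1024 * 1024] # up to 10MB
    ]
  }.to_json)

  # S3 verifies this HMAC-SHA1 signature of the policy document.
  signature = Base64.strict_encode64(
    OpenSSL::HMAC.digest(OpenSSL::Digest.new("SHA1"), secret_key, policy)
  )

  { key: key, policy: policy, signature: signature }
end
```

The Rails side only signs; the file bytes never touch our dynos, which is the whole point of the direct upload.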

Once the file is uploaded, the client goes to the ImagesController and creates a record for the image.

class My::ImagesController < InheritedResources::Base
  before_filter :authenticate_user!
  actions :create # ...
  respond_to :json

  # ...
end

The “create” action receives a single parameter, original_image_uid, which is passed on to the model.

class Image < ActiveRecord::Base
  belongs_to :user
  dragonfly_accessor :original_image

  def original_uid=(value)
    self.original_image_uid = value
    self.original_image_width = original_image.analyse(:width)
    self.original_image_height = original_image.analyse(:height)
    self.original_image_size = original_image.file.size
  end
end

This is where all the (image-) magic happens. Before the model is saved, we analyze the image width, height, and file size so we can use them later according to the application’s business needs.

The first call to original_image downloads the file from S3; files can be up to a few megabytes, so it takes about a second in the production environment. Then original_image.analyse calls ImageMagick’s identify command and caches its results.

So everything is quite straightforward. But we started to get R14 errors on Heroku after images#create requests. We were under the impression that some huge memory leak was eating up all the memory, but it turned out to be a memory bloat, not yet garbage-collected, that appears right after the identify command returns.

It looks like ImageMagick’s identify tries to grab as much memory as possible, for no particular reason as far as I can tell. So we had to fight these bloats in a few different ways.

The first is to run garbage collection. Check out gctools; the only thing we had to do was add these lines to config.ru:

require 'gctools/oobgc'
if defined?(Unicorn::HttpRequest)
  use GC::OOB::UnicornMiddleware
end

It works with Unicorn running on Ruby 2.1. Learn more about it in Aman Gupta’s blog.

And everything got back to normal: no more R14s, because the memory was cleaned up properly after each request.

But why should we allow identify to take so much memory in the first place? Here comes the solution: passing limits to the identify command.

Another line of code, added to initializers/dragonfly.rb:

Dragonfly.app.configure do
  plugin :imagemagick, identify_command: "identify -limit memory 0 -limit map 0"
end

So ImageMagick doesn’t eat so much memory any more, and even if it does, the bloat will be garbage-collected after the request.

I’m going to start a series of short digest blog posts covering a few things worth mentioning. I stumble upon a lot of things reading different sources; here I will share the most interesting ones. Well, at least the most interesting to me.

Here we go.



A browser for the HTML5 era. Entirely written in JavaScript. Free. Modular. Hackable.

The browser is really nice, especially its minimalistic design. The Chrome Development Console works as usual, so it can really be an alternative to Incognito tabs/windows when debugging web applications that require working on flows involving different logged-in users.


The ios_webkit_debug_proxy allows developers to inspect MobileSafari and UIWebViews on real and simulated iOS devices via the DevTools UI and WebKit Remote Debugging Protocol.

I’m really an Apple fan, but Google’s DevTools rock.

[http://macdown.uranusjr.com/](http://macdown.uranusjr.com/)

MacDown. The open source Markdown editor for OS X.

I used MarkdownPro for a while, but this one is really cool and free.


iTerm2 is a replacement for Terminal and the successor to iTerm.

2.0 is released.



Detecting login state for almost any website on the internet

I’m not sure it’s worth re-implementing location-based redirects in all the projects I’ve participated in, but the big guys probably have to consider this.


File encryption software that does more with less.

In short, users only have to remember the passphrase; the key pair will be generated automatically based on the passphrase.


There was a lot of buzz about two charting libraries recently: http://fastly.github.io/epoch/ and http://www.chartjs.org/. Both are cool; I’m looking forward to testing both of them in the next project that requires charting.


Component Kitchen - Great ingredients for your web apps

Tons of useful code, all searchable.


Custom Elements - a web components gallery for modern web apps

More or less the same.


Welcome to the future - Web Components usher in a new era of web development based on encapsulated and interoperable custom elements that extend HTML itself.

The most exciting thing to happen to front-end development since Backbone.js. I’m trying to use it in the mobile version of http://isratracker.com, which will come out shortly.



Google Noto Fonts - Beautiful and free fonts for all languages

Beautiful indeed; looks even better than Roboto.


Inside our Brand Evolution

An interesting read about the Airbnb rebranding.



How Yo became one of the most viral apps of all time — step by step


2013 Logo Trends

I had a lot of things to do last Thursday, Feb-17. I met a friend from abroad at 3am at Ben Gurion Airport and spent several hours talking before we went to sleep, signed a contract for developing a killer web app at 1:30am, and finally gave a presentation at The Junction at 4:30pm.


Vitaly gave an interesting presentation about MongoDB at Database 2011 Conference.

MongoDB. NoSQL for SQL addicts.

Slides are here.


We presented at the IGTCloud Ruby On Rails Day today.

The agenda was a bit different this time: not only technical presentations, but also a few words about the modern approach to building web applications.

Find the slides below.


This is going to be the first part of a blog post series about javascript widgets.

The first type I’m going to cover is the Popup Widget. Sometimes it’s called a Popin Widget because no new window actually pops up; instead, the content is shown IN-side the current page. The idea is quite simple: you provide an HTML/JS snippet to other sites, they put it into a relevant place, and some functionality of your site runs there.


RRDtool is the OpenSource industry standard, high performance data logging and graphing system for time series data. Use it to write your custom monitoring shell scripts or create whole applications using its Perl, Python, Ruby, TCL or PHP bindings.

Let’s run it with Ruby on Leopard.

sudo port install rrdtool

The default ports installation comes without Ruby bindings.


Thanks a lot to Amit Hurvitz for providing a VirtualBox Virtual Disk Image (VDI) file containing an up-and-running JRuby on Rails on GlassFish with MySQL. The image also contains some examples (actually solutions to the code camp exercises), all running on top of an OpenSolaris guest OS (which can be run on many host systems).

Grab the image ~1.5GB archive.

Grab the exercises ~9.7MB archive.


We participated in the JRuby on Rails with GlassFish Code Camp hosted by Sun Microsystems Inc. I spoke about the framework in general, trying to infect Java developers with Ruby On Rails. Slides are available.

Amit Hurvitz gave an exciting presentation about GlassFish and a short introduction to DTrace. Find out more details about the Code Camp.


I was the last person in our company still using ERB to render templates, while everyone else had switched to HAML. At the beginning, it was quite hard for me to read HAML compared to ERB. HAML looked to me like some completely alien thing with weird percent marks all over the place, and the significant whitespace never did it for me. On the other hand, ERB felt like a warm home after the years we spent together.

Until I did the switch.


2008 was the year we finally switched to full-time consulting. And like all consultants, we faced the problem of correct pricing. There are two well-known ways to charge a customer, a per-hour rate and a fixed-bid quote, and several combinations of the two.