Backup files and paths to S3 with write-only keys

Lately I’ve been doing maintenance on several servers, most of which simply had to be turned off, since the legacy services they hosted no longer had any reason to exist.
I don’t know about you, but when it’s time to turn off a VPS I always feel a bit anxious. Even though the app repository and database are already backed up for archival, you sometimes stumble upon snowflake server configurations or application logs which are not backed up and may be of interest in the future.

In such cases I used to back up those files locally on my laptop and then move them somewhere, depending on the specific situation. Sometimes it was a CD or DVD, sometimes some other kind of medium. Sometimes I thought it would have been tremendously useful to move those files from the server directly to S3, into some kind of backup bucket.

Other times I just needed a quick way to send a bunch of files directly to S3, say for periodic backups of databases or filesystem snapshots.

Then I thought about the security issues of keeping S3 keys on those servers. If for any reason a host was compromised, losing control of a key that allows anyone to read everything from that bucket would be a mess. Backups very often hold all sorts of sensitive information, and having to deal with such a security concern was just too much.

S3 and write-only keys

I never really developed a standard procedure for that, until a few days ago, when I thought about the possibility of having write-only keys on several servers, plus a kind of script that lets you just send files to S3, with no possibility of reading anything back.

That sounded great to me. As part of a standard setup for every host I could configure the following:

  • a configuration file with S3 write-only keys and the bucket name
  • a script suitable for use with those write-only keys
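On AWS, a write-only key boils down to an IAM user whose policy allows uploads and nothing else. A minimal sketch of such a policy (the bucket name is a placeholder):

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject"],
      "Resource": "arn:aws:s3:::my-backup-bucket/*"
    }
  ]
}
```

With no s3:GetObject or s3:ListBucket granted, a compromised host can still add new backups, but can’t read or even list the existing ones.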

In the beginning I considered using a binary like s3cmd for this purpose, but I found it didn’t play well with write-only keys. Then I decided to build my own script. With a few lines of Ruby it was actually very easy to come up with a script that does just that: read a path from the command line and recursively push the tree to S3.
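As a sketch of the idea (not Sink3’s actual code; the class and option names are mine), the whole thing is little more than a directory walk plus put_object calls. The S3 client is injected here; with the aws-sdk-s3 gem it would be an Aws::S3::Client, which only ever needs PutObject permission:

```ruby
require "find"

# Minimal sketch of a write-only uploader: walk a path and push every
# file to S3 under <hostname>/<date>/<original path>.
class WriteOnlyUploader
  def initialize(bucket:, hostname:, client:, date: Time.now.strftime("%Y-%m-%d"))
    @bucket = bucket
    @hostname = hostname
    @date = date
    @client = client # anything responding to put_object(bucket:, key:, body:)
  end

  # Build the S3 key for a local file, e.g. "tiana/2014-11-04/var/log/syslog"
  def key_for(path)
    File.join(@hostname, @date, path.sub(%r{\A/}, ""))
  end

  def upload(root)
    Find.find(root) do |path|
      next unless File.file?(path)
      File.open(path, "rb") do |body|
        @client.put_object(bucket: @bucket, key: key_for(path), body: body)
      end
    end
  end
end
```

Since the key only ever needs s3:PutObject, nothing in the script lists or reads the bucket.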


Sink3 is available here on GitHub. It’s at such an early stage that I felt a little uncomfortable even writing this post. But then I thought: “hey! it’s working after all.”

Here is what it does:

  • it uses the hostname to create a root folder on S3
  • it creates a folder named after the current date inside the hostname folder
  • it copies the files or paths it receives as arguments into the date folder

Working this way, it can even be used to perform periodic backups. Example usage:

assuming a host named tiana

What you get in the bucket is:
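Sticking with the hypothetical tiana example, after pushing something like /var/log/nginx the layout would resemble (paths and date are illustrative):

```
tiana/
  2014-11-04/
    var/log/nginx/access.log
    var/log/nginx/error.log
```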

Nice, huh? You don’t have to worry about anything other than avoiding conflicts in filenames, which would overwrite what you backed up previously.


What I’ve learned from the Rubik’s Cube

About a year ago I started playing with the Rubik’s Cube. I’ve always been fascinated by that toy (I don’t know if toy is the proper word for it), but I never had the chance to play with it extensively. Maybe because I never owned one. One day, while wandering through the shelves of a toy store, I saw one and brought it home.

I started playing with it during the summer, as a way to relax while doing something manual other than playing video games. I tried to understand how it worked and to come up with some reasoning about possible ways to solve it. I was able to solve the first layer – yes, I thought it was a good idea to try to solve it by “layers”.

Within a few days I was able to solve the first layer very quickly; it’s not that hard. Then I started to think about the second layer. Slightly harder, but I was able to craft my homemade recipe for the second layer too. Sometimes it worked, sometimes it didn’t. I started to get frustrated, and at a certain point I gave up. I told myself it was basically too hard for my brain to go further, and I asked YouTube to show me something.

You won’t believe it, but I was surprised to find out that YouTube was actually full of videos of people showing you how to solve the Rubik’s Cube. After a few search results I saw that a lot of people were talking about solving it by layers, and I felt proud of the little achievement of having started from the right perspective.

I focused on a video explaining how to solve the second layer. Thinking back to those days, I remember it took me a huge amount of time to learn how to make the second layer. I found it really hard to grasp those 3D movements. I practiced the first two layers over and over: solve up to the second layer, mess it up, start again. For a long time this was my favourite way to relax.

Once you can solve the second layer, there are at least four more techniques to follow to solve the remaining layer:

  • the yellow cross
  • the oriented yellow cross
  • the corners in place
  • the corners oriented

Those are the ones I learned and the ones I keep using today. I suppose there are many others, maybe faster and more complex, but I’m not interested in learning them for now.

The other day I was playing with it while absorbed in other thoughts, when I realized that no matter how much I practiced, there was always a certain degree of effort I had to put into it. It was as if, while I was getting better at the speed and precision of my movements and at their “automation”, the amount of effort I had to put into recognizing the “rule to apply” stayed somewhat constant.

I could be completely absorbed in other thoughts while doing the easy parts, for instance moving blocks around once the pattern to apply had been recognized, but I had to focus a lot when I had to understand what the next required step was.

I divided the mental activity required to solve the Rubik’s Cube with those pre-learned techniques into three main areas:

  • Pattern recognition
  • Rule representation
  • Rule application

Pattern recognition

This requires your brain to read the colors and positions and to scan your memory for a matching pattern. It is quite expensive, it seems, since I can’t completely focus on something else while doing it.

This step is completed when your brain understands where you are in the solving process; for instance, when you say: “the next step is to make the oriented yellow cross”. Let’s call this “high intensity”.

Rule representation

By “rule representation” I mean figuring out the movements to be done in order to complete the rule. This is what you should have clear when you start moving the pieces to place them in the desired positions.

For instance, this step is when you say: “ok, in order to make the oriented yellow cross I have to do this series of movements”.
I’m not sure this is actually a step by itself, because I admit I sometimes don’t even have to think about it. If this step exists, it’s surely very short-lived and melts into the third step. Call it “medium intensity”.

Rule application

This part is basically all mechanical. The part of your brain working on this is, I guess, the cerebellum, the one you use for all the movements you already master, like walking or typing on a keyboard. During this step I mostly have to focus on the precision of the movements.

It’s the Zen part, when you want to move your fingers as precisely as a robot would. Like the people you see solving the cube in less than a minute. I’m not good at that, but I don’t blame myself too much.

That’s actually the part I enjoy the most; it’s the part I can do completely disconnected, thinking about anything else. Sometimes I can even close my eyes and still get it right. Let’s say this is “zero effort, high amusement”.

After coming up with this kind of theoretical separation of scopes, I started to ask myself whether similar scopes can be found in some fields of everyday life.

According to the above schema, it would make sense to think that recognizing patterns is somewhat expensive for the human brain. Thinking about “what to do next” is not so hard when you have a set of applicable solutions to the problem.

Applying the rule may be more or less hard depending on the kind of physical activity involved, but it’s absolutely something that can be automated and “delegated” to peripheral areas of the brain.

The conclusion is that I was able to find several examples of these scopes in real-life activities. Talking about them would require a separate post. For now I leave you with these few thoughts. I would love to hear what you think.


HelpScout Free Plan Review

Today I’m writing about HelpScout. If you haven’t tried it already, you can go there and set up a free account. You’ll be surprised by how much it resembles your inbox.

In my previous post I wrote about Desk. What I liked the most was the language configurability, assuming you need multilingual support, and the wide range of options you have to set up your workflow. HelpScout goes in another direction.

At Exelab we were looking for a way to streamline communication with our clients. Too often we would receive support emails and reply to them without carbon-copying anyone in the team. Ours is a small team with very intense internal communication (even if we are distributed), so using help desk software always seemed an oversized approach. We know tools can stand in your way very easily; furthermore, we always thought help desk software is perfect when you have a product, and less suitable when you are just doing consulting. Add to this the fact that most help desk software bills you per agent, and you have the full picture of why we didn’t use any of those before.

Recently we started to rearrange our internal workflow, in order to make it easier to deal with clients (and for clients to deal with us). So we decided to give a second shot to one of those tools. We had heard good things about HelpScout, and this is how it all started. We signed up for a free plan and started to play with it. To go straight to the point: I think HelpScout is the best option around for our setup.

I was really impressed by how much it resembles a webmail.


I love the way they arranged the navigation on the page. It’s simple and focuses on the important things. You can’t get lost. I don’t remember the onboarding being invasive (I don’t remember it at all, actually), yet I was able to find everything I needed very quickly.

HelpScout is a hub for your team’s email communication. It’s like everyone using the same email account in front of the same computer, with the addition of a few useful tools: you can assign tickets to other team members, mark a conversation with one of four states (open, pending, closed, spam), and add tags.

You configure an email address to receive inbound messages (you need to have that email account working separately) and you are done. You can start talking to your clients through HelpScout right away.

Customer records are automatically created from incoming messages. A little form allows you to create customer records if you want to write to someone who doesn’t exist yet in your database.

One thing I didn’t like was this step of creating customers. It’s distracting: you are presented with an endless list of input fields, of which you just need first name, last name and email. It never happened to me to input anything else. Of course it depends on the kind of help desk you are working on. The more complete the tool, the more options you have to deal with, which makes the tool less pleasant to work with, unless it’s designed extremely well.

With HelpScout you experience the complete opposite. The options you are presented with are just those you need. This makes the whole experience smooth and effortless.

Another winning point of HelpScout is that you can edit inbound messages. When we started to use it, we set up a dedicated email account for it (support). Clients were still writing to us at our own email addresses. In order to have the conversation flow through the new tool, we decided to forward every support email we received to support and reply from there.

It worked so well that I think someone at HelpScout spent time thinking about this exact workflow. When you forward an email from, say, Gmail, you get a block of text at the beginning of the email body with info about the forwarding.
If you leave this block at the first rows of your message, HelpScout will use it to populate the issue. You’ll have the original sender recognized as the customer, and I guess the conversation title also gets the “Fwd: ” prefix removed.
That’s it. It’s as if your customer wrote directly to HelpScout.

Edit incoming messages

One thing I appreciate very much is the possibility to edit incoming messages. Sometimes you want to create a ticket from an incoming message that includes irrelevant blocks of text, or a typo.

This feature is also very useful when you need to reformat forwarded messages. It’s easy: click “edit” and edit. No warnings, no alerts. HelpScout treats you as an adult; it just assumes you know what you are doing.


At Exelab we use HipChat. Needless to say, we immediately linked HelpScout to HipChat. It sends immediate notifications and, more importantly, it uses mentions! Maybe it matches email addresses, I don’t know; the fact is that when a customer replies to you in HelpScout, it mentions you in HipChat. It’s impossible to miss a conversation this way, also because one often has email notifications active for HipChat mentions too. One thing I’m actually considering is turning off some of the email notifications from HelpScout (which are very configurable).


In the recent rearrangement of our workflow we also started to use HighRise (which I’ll talk about in a future post maybe). HelpScout and HighRise integration allows you to do the following:

  • save customers from HelpScout to HighRise (the opposite doesn’t seem to be possible unfortunately)
  • configure an auto Bcc for yourself, so that all your messages are saved to HighRise too.

Since HighRise is a complete CRM solution, it’s very valuable to have the history of messages saved there too. You can, for example, aggregate customers into “companies” and “cases”, so as to have a more complete view of the conversations going on around a particular project. HelpScout and HighRise seem to complement each other in this sense.


I wish I had started using it earlier. It solves a problem with a very gentle learning curve. Your inbox is not the right place to deal with support requests, unless you work alone, of course. The whole team needs to know what’s going on in your organization. HelpScout makes that easy, and integrates smartly with the other tools you already use. Even the free plan gives you big value.

We are very focused on defining processes to scale our business and provide a better service to our clients. So far HelpScout seems to be the best fit for our needs. The free plan seems to give us all we need already. We don’t use a knowledge base yet, which is not included in the free plan anyway. I think we won’t hesitate to switch to the paid plan (they have just one) as soon as we need it.

Give it a try, you won’t be disappointed.

How to setup Zimbra to forward missing recipients to Google Apps

This tip shows how to configure a Zimbra instance to deliver messages to Google Apps when the recipient email address is not found on the local server.

This is the scenario:

  • you have an email domain on Zimbra with an email account:
  • you have Google Apps configured to use Zimbra as a secondary server. This means that Google Apps will be configured as the only MX DNS record, and will forward all unknown recipients to Zimbra as a fallback.
  • you have an email account configured on Google Apps:
  • when you are logged into Zimbra webmail (or use an IMAP client), you want to be able to send messages to

In normal conditions, Zimbra will reply with a message telling you that the recipient was not found.

The following command will tell Zimbra to forward every message for the given domain to Google Apps:
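Something along these lines, taken from Zimbra’s split-domain recipe (example.com and the Google MX host are placeholders; double-check the attribute names against the split-domain documentation for your Zimbra version):

```
zmprov modifyDomain example.com zimbraMailCatchAllAddress @example.com
zmprov modifyDomain example.com zimbraMailCatchAllForwardingAddress @example.com
zmprov modifyDomain example.com zimbraMailTransport smtp:aspmx.l.google.com:25
```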

I searched for a way to apply this configuration through the administration interface, but I wasn’t able to find anything. So running it at the command line seems to be the only option.

Then, restart postfix:
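On a standard install, that means restarting the Zimbra MTA as the zimbra user:

```
su - zimbra -c "zmmtactl restart"
```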

From now on, every message for the given domain will be forwarded to Google Apps first.

If you are a Zimbra guru and you know a better way to do that please drop me a line in the comments.

[VIDEO] I’ve fallen, and I can’t get up!

Computer graphics is black magic to me.

Ultralight migration

5 Lessons learned from a CMS migration

At the time of writing, my laptop is running a migration script. It’s evening and I’m quite tired, because I spent the whole day trying to make it work almost one year after its last commit. Yes, one year.

We decided to migrate this client’s site from our own homegrown CMS, made with Rails 2.3 and MongoDB, to WordPress. The migration took almost two weeks to complete. Then, for various reasons, the final switch never happened. Our client kept using the old CMS for one more year, and we kept making little changes to it. Little incompatibilities arose with the migration script, and here I am now, trying to adjust this bunch of classes to make the old database fit the WordPress back office.

If I’m lucky, this will be the last attempt. It’s running, and I’m tailing the logs. It’s a humbling experience. Something just doesn’t feel right today. I guess I can write down some lessons learned from this experience.

1 – Give the right value to time

This database is pretty big, with thousands of articles and attached images. At the time, I didn’t even think of writing a multithreaded script to speed up writes. Speed was just not a concern. The migration was complex enough already without also implementing a multithreaded approach. Now, reading back my code, I feel I could have made several things run in parallel with little effort and great benefit to the execution time. Also considering what I’m about to write as point #2.
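To make the point concrete, here is the kind of minimal thread pool I have in mind: plain Ruby, a queue of records drained by a few workers, with the per-record work passed as a block since the real migration logic is out of scope:

```ruby
# A small thread pool draining a queue of independent records.
# Each worker pops a record and runs the (placeholder) migration block.
def migrate_in_parallel(records, workers: 4)
  queue = Queue.new
  records.each { |r| queue << r }
  results = Queue.new
  threads = Array.new(workers) do
    Thread.new do
      loop do
        record = begin
          queue.pop(true) # non-blocking: raises ThreadError when drained
        rescue ThreadError
          break
        end
        results << yield(record)
      end
    end
  end
  threads.each(&:join)
  Array.new(results.size) { results.pop }
end
```

Since the writes are I/O bound, even a handful of workers can cut the run time considerably.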

2 – Try to apply incremental changes

I’m not sure whether I did it at the time, but I’m almost sure I didn’t. This migration script is not incremental: it expects to find an empty destination database each time you run it. In conjunction with point #1, this means that if for some reason something goes wrong at a certain point, you have to fix it, start over, and you’ve lost a lot of time.

Planning for incremental changes clearly brings more complexity to the whole thing, but it can of course save you a lot of time. You have to find the balance between complexity and speed. Also consider that incremental changes can allow for a smoother switch between the two systems.
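The simplest form of what I mean, sketched: track which source ids already reached the destination and skip them, so a re-run resumes instead of starting from scratch (in a real script migrated_ids would come from a query on the destination database):

```ruby
require "set"

# Skip records that were already migrated, so the script can be re-run
# after a failure without wiping the destination first.
def migrate_incrementally(source_records, migrated_ids = Set.new)
  source_records.each do |record|
    next if migrated_ids.include?(record[:id])
    yield record # the actual write to the destination
    migrated_ids.add(record[:id])
  end
  migrated_ids
end
```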

3 – It’s not different from any other project

I don’t know why, but for this migration I didn’t write a single line of tests. I use them in my daily job coding web apps. Tests first, red-green-refactor. I’m not a TDD fundamentalist, so I’m also fine with writing tests afterwards, or with spikes to try viable solutions. In this case I didn’t leave a single line of executable tests. Which is weird, because there are several kinds of post types I could have mocked up very easily to ensure everything was going to fit into the right field in the destination database.

I guess what made me think it wasn’t worth writing tests was that I thought of it as something disposable, a one-shot thing. Run it, migrate the site, never use it again. I don’t know; code seems to stick around for an incredible amount of time. Deals shift, sometimes by years, like in this case. It’s not infrequent to have to put your hands on code someone else wrote years before. So be wise and keep everything clean. There’s no disposable code. Disposable code is the command line, and even that you frequently reuse (CTRL-R anyone?).

4 – It’s hard to stay focused with a long running script

I knew this already after years of practicing TDD. If tests are fast, you work fast. If tests are slow, you work slowly, and the ratio is non-linear. So the faster the migration, the faster you can fix bugs and deploy changes. It’s no different from any other project I’ve worked on. Naturally, you should privilege correctness over speed, but… here comes my lesson number 5.

5 – Embrace concurrency

Always remember that you have many cores, and databases are already optimized for concurrent writes. You should leverage this as much as possible. Ruby doesn’t make it easy to think in terms of concurrency by default. Luckily there are several projects which help with that, allowing an easy switch to a multithreaded approach. If you are familiar with Sidekiq, you know what I’m thinking about.

Sidekiq is largely adopted to offload the HTTP layer of web apps, but it’s a perfect fit for many other scenarios too. A large database migration, for instance, would fit very well. If I were to rewrite my script today, I would use it for sure.

The point is that you should think in threads by default. It’s a mind shift you can’t postpone any longer. If you’ve made it already, good for you, honestly. I’m starting today.

Image Credits: Flickr

A short review of Desk

In my previous article I wrote about the five most important things every SaaS should start with. Here I want to focus on the product we at News@me use for customer care, and the features I appreciate the most.

I don’t want to keep you on your toes: it’s Desk. Desk is online software for help desk management that, according to its homepage (at the time of writing), offers “Fast awesome customer support”. I think part of the awesome depends on you more than on the tool. As for the speed, I have to say it keeps the promise.

We use it primarily for the following:

  • collect help requests from inside News@me for authenticated users
  • collect help requests from outside News@me, from people who are not signed in yet
  • open tickets for every email delivered at
  • publish knowledge base articles in Italian and English

And of course to manage the whole ticket lifecycle.

Multilingual support

One thing that made us decide to go with Desk was the multilingual support. It’s really everywhere: in ticket management, in the documentation, everywhere.

It was very important for us from the beginning, because we wanted to deal with both Italian- and English-speaking customers. So the possibility of publishing help pages in both languages was a requirement.

Talking about documents, the translation system is really smart. When you write a document in the main language, let’s say English, it shows whether you are missing other languages. If the main document changes, it informs you that the Italian version may be out of date.

You’d say such a feature is probably appreciated most when you have hundreds of doc pages. I don’t think so: it’s pretty easy to forget to update a translated version even with a few. For us, given that we plan to introduce other languages pretty soon (I know, dear American readers, that may sound strange to you), the mess can easily multiply.

The help desk backoffice

I admit that the first time I saw it I was not particularly enchanted. But maybe I’m too sensitive to the latest trends in web design. I’ve used Zendesk in other projects, which recently invested quite a bit in a more attractive user interface for agents.

Desk is far from there, as far as I remember. But all the controls are well placed on the page. You can’t get lost. One cool thing is that it’s easy to navigate through tickets even in the same browser tab: they simply arrange at the top of the page as if they were tabs, allowing you to easily switch between them without a separate browser tab for each.

The left third of the page is for customer and ticket info, while the right side is for the conversation itself.
There’s everything you can imagine you’d need when dealing with support tickets, so I won’t state the obvious.

Quick codes

Among the few things I use repeatedly each day, some are a real time saver. Canned messages are one of those. You write documents as you would normal KB pages, then you mark them as “canned responses”. These can come with a quick code that you decide. The quick code allows you to populate the input field of the ticket reply in a snap with the document’s text.

That’s nothing special per se; the awesomeness is that it works across different languages. You assign a language to the current customer (yes, you have to do it manually; unfortunately it doesn’t even try to guess the customer’s language), you type in the quick code, and the corresponding translated message fills in the text area.


I’ll be short here. Enough to say that you can customize just about everything. You can dress your doc pages with the most exotic web design you can imagine. If you’re clever enough, your users won’t even notice they navigated away from your app to the KB.


There are three major integrations we currently have with Desk:

  • single sign on
  • inbound email messages
  • HipChat

Single sign on

Normally, in order to submit a ticket in Desk (as in other support ticket systems) you have to fill in a form and leave your email address. This can be annoying for customers who are already signed in. The single sign-on feature allows you to show those customers just a text area for the message and a submit button.

In practice, you are telling Desk that you vouch for the identity of this user. You have to set up an endpoint on your app using a shared key and an HMAC signature. No rocket science. This is mandatory if you want to provide your users with a seamless integration between the documentation site and your app.
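The signing side is a one-liner in most languages. A sketch of the general idea in Ruby (the payload layout and encoding here are illustrative, not necessarily Desk’s actual format, so check their docs for the real thing):

```ruby
require "openssl"
require "base64"

# Sign the user attributes your app vouches for with the key shared with
# the helpdesk, so the remote side can verify the data came from you.
def sso_signature(payload, shared_key)
  digest = OpenSSL::HMAC.digest(OpenSSL::Digest.new("SHA256"), shared_key, payload)
  Base64.strict_encode64(digest)
end
```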

Inbound email messages

A.K.A. the support mailbox. You may want to set one up and configure Desk to pick up messages and open tickets from a specific email account. This is a key point if you want to broaden the input channels (and you want that, right?).

Sometimes the less tech-savvy members of the team (i.e. the marketing dept.) tell customers to use email as the quick way to ping the support team. An inbound email channel is the way to have those messages picked up and transformed into open tickets. No more email-driven conversations with customers.


HipChat

Or whatever you use as a team hub. I’m pretty sure the most common third-party tools are supported. It’s maybe not vital, since agents receive emails anyway whenever a new ticket arrives, but sometimes it’s nice to have tickets appear in the chatroom too. If nothing else, it’s an occasion to joke together about ticket subjects.

What I don’t like in Desk

There are a few things I don’t like about Desk. The first is the live chat feature. It never worked well for us. Maybe we didn’t understand it, but nowadays you want something like Olark on your website.

Some people started a live chat with us on Desk; those conversations were delivered to our inbox as open tickets, but we were never able to start a live chat session with them as it should be. Maybe live chat is not for us. In the end we removed it.

Another thing I don’t approve of is the limitation on the number of agents in the team. You have to pay for a higher plan if you want more people. We are a small team of four, but the limit for our plan is three, so one has to stay out. That’s the only reason we should buy a higher plan. Odd. But it looks like a kind of industry standard; if you know of someone doing it differently, let me know.

Final vote

I don’t know the current state of Desk’s competitors. I’d recommend starting with Desk to anybody with multi-language requirements like ours. I’d also recommend it for the uptime, for the quick replies to the few support tickets we opened with them, and for the overall reactivity of the interface.

Found this useful? Please drop me a line in the comments or ping me on Twitter @fregini.


The five pillars of SaaS

If you are thinking of starting your own SaaS product, you’d better understand from the beginning that what you see in other people’s products, what you perceive and what you can touch, is not even half of the job. It’s way less.

SaaS products are made up of thousands of features, most of which an average user (the one I was until recent years) is totally unaware of. Maybe you are a coder, a smart one, but even with good programming skills you may not immediately realize the burden a SaaS owner or a development team has to carry to make things move in the right direction.

A nifty text editor, live page updates, complex analytics: all the interesting and cool features, the part you look forward to coding, come after. What comes first is planning and the setup of the infrastructure. Here is a list of things every SaaS owner should take into account before starting to hand out tasks.

1 – Checkout process

The first and most obvious thing, which is often forgotten, is the checkout process. It’s a pretty common mistake for founders to start giving away features for free, with processing time and storage paid by nobody (read: paid by themselves or their investors), with the idea of testing how the product behaves and only then thinking about asking money for the provided service.

This approach is clearly wrong most of the time. The first reason is that until you have paying customers, it’s hard to understand how much your audience really appreciates your service.

There may be times when it makes sense not to think about billing from the beginning, though. For instance, you may be putting a service online with no idea of how the architecture is going to react to actual use by your audience. Another reason may be a business oriented more towards advertising or other business models.

In general, it’s better to start with a way to earn money, one way or another. It doesn’t have to be fully automated. You can set up a hybrid solution and be fine with it for several months. Just have a ready solution to bill your customers’ credit cards.

You may be tempted to think it’s just about the payment gateway, but it doesn’t end with adding Stripe or Paymill to your site.
You must be clear about the ideal path for moving money from your users’ pockets into yours.

2 – Customer care

If taking money from your customers is vital for your business, providing them with a pleasant and well-organized conversation is even more important. Sooner or later you’ll start receiving messages from people for the most disparate reasons. If it doesn’t happen, it may be a sign that your product is not arousing any interest in anyone on the Internet — not a good sign.

So expect — and pursue — conversation. It’s vital to the business. How to run conversations with your customers would require an article of its own. For now it’s enough to say that, in the best case, you’ll be 50% wrong about what people find interesting in your product.

Remember that starting a business is not like launching rockets. You have to be hyper-tuned to your customers’ frequency. You’ll find in their questions what they were actually looking for when they signed up on your site. Learn from their misuse. You’ll find use cases you’d never have thought possible. From those, a new unmet need and new business opportunities will sometimes pop out.

3 – Knowledge base

Not everybody likes to search for documentation online when they are stuck on something. Many people will just drop you an email and wait for an answer. Somebody won’t even do that: they will just turn around and be gone forever. Other people, instead, will greatly value the online documentation you provide. It doesn’t matter how good your help desk support is.

An online knowledge base is something you must provide for a business-grade SaaS, something your customers can rely on. A user may face a problem late at night, outside office hours. For how long are you going to answer tickets on Sundays?
Help desk and knowledge base are two sides of the same coin. Customers use them because either:

  • your user interface is not clear enough for them to understand how they are supposed to use it (you can work on it, but you can’t make everyone happy).
  • something in your software is not working as expected (or they think so).

Your knowledge base should hold the answers to such questions. It should describe the expected outcome of a given activity, so a user can judge whether the software is working correctly or they are just misunderstanding it. It should tell the user that what they are desperately looking for in the Analytics menu is actually located under the Your account menu. Or simply that what they are looking for is not there. They should be able to come up with an answer without having to wait for help desk support, however long or short that wait may be.

Over time you’ll probably notice patterns in what is being asked through the help desk channel. As soon as you do, be prepared to pack up a page for that frequently asked question.

Another reason to invest in your knowledge base is that it’s free content which increases your footprint on the web. Be sure to have SEO-optimized pages. Write a lot of them. They should talk about your product better than anyone else does.

4 – Outbound communications

Here is where you reach out for your customers’ feedback. Some people are very extroverted. They’ll type something in an input field just because it’s there. The vast majority will instead just think the time and effort to leave feedback is not worth it. Among those is a part of your paying customer base.

Consider that anyone signing up on your website is looking for something. No one signs up just because you put a form in front of them.
They may just be curious to know what the product is about, or they may be really interested in solving a problem they are struggling with. In both cases there may be something they don’t think is important to tell you. You have to strive to understand what goes on in their heads from the moment they get inside your virtual door.

Ask yourself: how would you behave if you were running a hardware retail store?
A customer comes in, starts to look around, maybe checks some prices, weighs something from a shelf. What do you do? You kindly approach and ask: “May I help you with something?”.
With SaaS it’s more or less the same. You need to find a way to interact with and gather some feedback from whoever comes in. Yes, you’d say, but how can you recreate that same feeling online?

First off, you can use pop-ups to let users know you are there to help. That you are present, on each page they visit, whenever they click around. Another technique, which can only be applied after sign-up, is to collect user accounts into cohorts and send them emails asking for feedback.

You can start with the ones who signed up last week and never signed in again. You have to be creative and craft the kindest email you can to let them know you care about their opinion.
Do you feel like you’re stressing someone, or being spammy? Well, they signed up for your service after all. Are you afraid they’ll be annoyed and unsubscribe for good? Well, maybe. What are you going to do with unsatisfied customers anyway?
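To make the cohort idea concrete, here is a minimal sketch in Ruby. The data layout and the cohort rule (signed up in the last week, never came back after day one) are my own assumptions; in a real app these records would come from your database.

```ruby
require "date"

# Hypothetical user records; in a real app these would be database rows.
User = Struct.new(:email, :signed_up_on, :last_sign_in_on)

users = [
  User.new("a@example.com", Date.today - 6, Date.today - 6),
  User.new("b@example.com", Date.today - 5, Date.today - 1),
  User.new("c@example.com", Date.today - 30, Date.today - 29)
]

# Cohort: signed up within the last week and never came back after day one.
to_contact = users.select do |u|
  u.signed_up_on >= Date.today - 7 && u.last_sign_in_on == u.signed_up_on
end
```

Each cohort you define this way can then get its own tailored email.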

5 – Analytics and internal reports

As already said, what your customers see of your product is something like the tip of the iceberg. Take into account, and plan resources accordingly, that you have to develop logging procedures and facilities to be able to quickly answer the most disparate questions, both about user activity in your application and about system performance. The following is just a quick list of questions you may need an answer for:

  • how many people logged in today? Yesterday? During the last week? (this is easy)
  • how many people can I consider active users? (what is an active user, by the way?)
  • are people doing everything I expect them to do in my app, or do they just sign up and quit?
  • is feature X really appreciated? How many people are actually using it?
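The first two questions can be answered with a few lines once the raw data is logged. A hedged Ruby sketch follows; the in-memory login log and the “active within 7 days” definition are assumptions for illustration:

```ruby
require "date"

# Hypothetical login log: user_id => list of login dates (in a real app
# this would come from your database or log pipeline).
logins = {
  1 => [Date.today, Date.today - 1],
  2 => [Date.today - 10],
  3 => [Date.today - 2, Date.today - 40]
}

# One possible definition of "active": at least one login in the last 7 days.
active_users = logins.select { |_id, dates| dates.any? { |d| d >= Date.today - 7 } }.keys

# "How many people logged in today?" is then a one-liner.
logged_in_today = logins.count { |_id, dates| dates.include?(Date.today) }
```

The hard part is not the counting, it’s deciding what “active” means for your product and logging the events that let you answer.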

Just an example. Other questions may focus on the system itself:

  • is my application perceived as “fast”?
  • is anyone receiving errors?
  • what’s the growth rate of my database? How much time do I have before a migration to a larger hosting plan is required?
  • what’s the slowest part of my app?
  • how would the system react to double the current load?

and so on and so on. The interesting part is that you’ll need to mix up those two separate domains to come up with important decisions. For instance:

  • is my application perceived as fast by paying users?
  • is there any user who is experiencing more errors from my app?
  • is feature X really appreciated by user Z?
  • does the Bronze plan take too much of my storage space?
  • how would the system react if I were to extend the trial period by three weeks?

Too many founders either don’t ask such questions or just guess, simply because the time required to build the infrastructure for that is non-trivial, and they are absorbed by the interesting part, which is proving to the Internet that they are right.
What I understood lately is that this is the cool part of the coding job: being able to answer such questions. Business changes, features come and go. Analytics will be extremely valuable for at least one person. You.

Summing up

Whatever SaaS product you are thinking about, whatever technology you are going to use, you can’t skip what’s listed above or leave it to chance. You’ll find plenty of providers offering such services as B2B SaaS. In the next articles I’ll try to provide a list of what we at News@me are using right now and why, what the benefits are, what experiences we had in the past and what we learned.

It doesn’t matter if you are going to start by relying on third-party providers from the beginning or you’ll arrange something with free online tools, or even develop your homegrown solution.

Those are the pillars. They have to be there from the beginning, and you’ll have to master all five. After that, you can still go wrong. But at least not everything will be lost. The pillars won’t let you crumble.

Did you find this article useful? Drop me a line in the comments or reach out on Twitter.


Migrate your blog from Jekyll to WordPress in 3 steps

Long story short: you have to use an RSS 2.0 feed. Here is the flow I followed recently, and it worked quite well. It did not import images, though. That was not a big issue in my case since I had very few of them and a quick review was enough. If you find better ways to do this, please drop a comment and let me know.

Install RSS 2.0 feed in your Jekyll site

My installation didn’t have an RSS feed; it had an Atom feed, which unfortunately didn’t seem to work for this purpose. Find an RSS 2.0 plugin and install it into your _plugins directory. If you don’t have a plugins directory, make one.

I used this one. It requires two configuration items in _config.yml which I didn’t have: name and URL. Be sure to have those in your config. Restart Jekyll and visit /rss.xml on your local site.

You don’t even need to push this change to production, since the WordPress importer we’ll see next will ask you to upload a file.

So download this file from your local instance by doing something like
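a plain curl download (assuming Jekyll is serving locally on its default port 4000 and the plugin exposes the feed at /rss.xml; adjust host, port and path to your setup):

```shell
# Save the locally generated feed for the WordPress importer.
curl -o rss.xml http://localhost:4000/rss.xml
```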

Keep the resulting rss.xml file ready for the next step.

Install an RSS import plugin on WordPress

This import process uses an RSS feed file as its source. Install this plugin

Not much to say here. Move to the RSS import tool and upload the file you generated at the previous step.


You should see a long list of “imported successfully” lines.

Import images

This is the boring part. If you find a more clever way to do it, please let me know.

Your images are likely to be in your Jekyll images folder. Open that folder, select all the relevant images and drag them into the WordPress media library. You can imagine what the next step is: relink the images by visiting each post. Delete the image placeholder, which will be broken at this point, and add back the corresponding media item from the library.

As I said, this was not much of an issue for me because I didn’t have many images. If you have tons of images and this step is not viable, I’d suggest a couple of possibilities:

  1. Work on the rss.xml file so that, once the images are uploaded to the WordPress media library, the paths match and images aren’t broken.
  2. Spend some more time finding a more clever importer, so that images linked from your domain are actually imported into the library, and maybe the image src is changed according to the new path.
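For option 1, a tiny Ruby helper could rewrite the paths before the import. This is a hedged sketch: the `/images/` source folder and `/wp-content/uploads/` target are assumptions, so adjust them to your actual Jekyll and WordPress layouts.

```ruby
# Hypothetical helper: rewrite Jekyll image srcs in the feed HTML so they
# point at the WordPress uploads folder. Paths here are assumptions.
def rewrite_image_paths(html)
  html.gsub(%r{src="/images/([^"]+)"}, 'src="/wp-content/uploads/\1"')
end
```

You could then run it over the whole feed with something like `File.write("rss.xml", rewrite_image_paths(File.read("rss.xml")))`.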

Other issues

I found posts had line breaks where there shouldn’t have been any. I don’t know why, but I had to review the posts by hand and fix them during the process.

Another thing you may want to check is code blocks (code samples and the like), which you may find are not appearing as they used to. I’m using the Crayon code highlighter now.

Let me know how your migration goes and whether you found this post useful. If you catch me at coffee time I’d be happy to help you out.

About Changelogs

Lately I realized a thing which made me sad: changelog formats are far from standardized.

I stumbled upon this thought while thinking about a way to have bundle outdated print out a kind of changelog excerpt, to help me decide whether to update a gem or skip the latest version this time.

The first step would be to have a kind of changelog parser to extract machine-readable data. So I started to look around for one. I hardly found anything, which somehow surprised me.
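To make the idea concrete, here is a minimal sketch of the kind of parser I have in mind. It assumes a markdown-ish changelog where each version is a heading like `## 1.2.3` followed by bullet items; that format is my own assumption, which is exactly the problem this article is about.

```ruby
# Hedged sketch: extract { version => [changes] } from a markdown-ish
# changelog. Only works if the file follows the assumed conventions.
def parse_changelog(text)
  versions = {}
  current = nil
  text.each_line do |line|
    if line =~ /^#+\s*(\d+\.\d+\.\d+)/
      current = Regexp.last_match(1)
      versions[current] = []
    elsif current && line =~ /^\s*[-*]\s+(.+)/
      versions[current] << Regexp.last_match(1).strip
    end
  end
  versions
end
```

A tool like this breaks as soon as a project picks different conventions, which is why a shared format matters.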

The official CHANGELOG format

I wasn’t able to find an official spec for changelogs. I found instead a page which lists a few pieces of advice on how to create a changelog file. It’s venerable. It describes a few rules especially suited to programs written in the C language.

It lacks several directions I was expecting to find. For instance, it doesn’t say anything about version numbers. It’s all about the dates of changes.

Different formats and conventions

I decided to have a deeper look at several popular OSS projects to see how they behave. I was expecting to find a changelog of some format in the root directory of each project. That was almost never the case.

The Linux Kernel

The Linux kernel does not have a changelog file in the root directory. There’s a Documentation folder with a “Changes” file inside. It’s a documentation file, not a changelog strictly speaking, so it does not track changes in specific versions. Other changelog files are present in specific component folders.

Ruby on Rails

There’s no changelog in the main folder, but each component folder holds its own changelog file. It’s a markdown file, with no version numbers and no dates.


No changelog at all.


No changelog.

A History.txt file with version numbers and no dates.

What about libraries?

Since I’m a Ruby guy, let’s have a look at some popular Rubygems:


Sidekiq has a changelog file with version numbers and a bullet-list item for each change. No dates. It has code examples.


Has a Changelog.rdoc file with versions and dates as headings, and bullet lists for features and bugfixes.


Has a changelog file with version numbers and changes in a bullet list, just like Sidekiq.

Do I need a changelog?

There are times when it doesn’t make sense to write a changelog. A small team working on a private project may not find any benefit in writing one. But if you work on an open-source project, especially a library, you should definitely have one.

Dependencies in a project tend to become difficult to manage partly because you don’t have a clear view of what changed.
Semantic versioning helps a lot to quickly understand whether it’s safe to update a component or not.

Is it a bugfix? You should update ASAP. Is it a minor change? It’s safe to update, and you should, because you want to stay close to the edge, right? Is it a major change? You should take the risk and update, because the world is moving on. But it’s not always that easy.
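As a toy illustration of those semantic versioning rules, a version bump can be classified mechanically. This is a hedged sketch, not a full semver implementation: pre-release tags and build metadata are ignored.

```ruby
# Sketch: classify a version bump as :major, :minor or :patch under
# semantic versioning. Assumes plain "X.Y.Z" strings only.
def bump_type(from, to)
  f = from.split(".").map(&:to_i)
  t = to.split(".").map(&:to_i)
  return :major if t[0] > f[0]
  return :minor if t[0] == f[0] && t[1] > f[1]
  return :patch if t[0] == f[0] && t[1] == f[1] && t[2] > f[2]
  :none
end
```

The point of the article is that this mechanical answer is not enough: even a :patch bump deserves a human-readable summary of what changed.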

Even if it’s a bugfix, I want to know what changed. And because every time I run bundle outdated a long list shows up, I would like a quick summary.

Many times, even minor version changes are applied too easily. If a change is backward compatible you should still ask yourself whether it’s something you would benefit from. Most of the time it’s a newly introduced feature, which you may not need. Moreover, a backward-compatible change may introduce a bug.

Developers are often caught by a kind of up-to-date fever. I think part of it is due to the fact that it’s hard to have a clear view of your out-of-dateness.

Why a shared format is important

Going back to the initial thought which led to all this typing: a shared standard for changelogs would make it possible for machines to do something interesting with those files.

Bundler is one of the many dependency management tools which may benefit from a parsable format. Even an online service would be possible with little effort, one that sends you a weekly digest of the changelogs of your project’s dependencies.

Modern requirements for a changelog

Let’s have a look at what the specs for a modern changelog should be.

Version numbers

The most important thing a changelog should tell you is the version number the changes refer to. Listing dates is much less important.

Markup language

Having a look at what most projects are doing, I think any kind of text-friendly markup language is desirable. Plain text is great, but the world is on the web, and with a tiny markdown effort we can make web pages much more pleasant to read.

Markdown would also make it possible to have automated syntax highlighting for code samples and all the stuff we already know and appreciate. This should be optional though.

List of changes

These should be listed from the most recent to the oldest. Each change should start with a list-item marker (this may vary depending on the markup language adopted) and be composed of at least a short summary.

Optionally, a change may have a longer description, links, code examples or anything else a developer may want to add.

Authors

This is far less important than the previous three. I would hardly decide whether to apply a change based on who made it. That’s something you dig out of version control when you’re interested. This one too should be optional.

Dates

I don’t know what to do with dates in a changelog. They may have a meaning, but what I’m trying to come up with is a format which includes the basic facts a developer needs to decide whether a change is worth upgrading for. I don’t think anybody would take this decision based on a date. I consider dates optional too.

It’s worth noting that if you like to put dates in a changelog, you may want to put them on both change items and version numbers. If you are a date maniac, you’d probably want to know how much time passed between the last change and the version release.

An example

Putting the requirements above together, a standard changelog would list version numbers as headings, each followed by a bullet list of changes from most recent to oldest, with authors and dates as optional extras.

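As a sketch of how that could look in markdown (all version numbers and entries here are hypothetical):

```markdown
## 2.1.0

* Add streaming mode to the export API
  Optional longer description, links or code samples go here.
* Deprecate the `--legacy` flag

## 2.0.1

* Fix crash when the config file is empty
```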

I think that having a shared format for changelogs is a bit of a utopia. But the possibility of having your changes show up nicely in one of the many dependency management tools out there may be a huge stimulus for developers to agree on something.

What do you think?