Tag: documentation

Write the Docs: Ann Goliak – Helping the help pages

I’m at Write the Docs today in Portland and will be posting notes from sessions throughout the day. These are all posted right after a talk finishes so they’re rough around the edges.

Ann works at 37signals on their support team. She talked about the launch of their new customer-facing support documentation.

The help pages were designed to be self-service. The old layout organized everything as a flat list of questions, with no sense of hierarchy or importance. The first question listed was, “What are the ‘Your global feeds’ on the Dashboard?” So why was that the first question? Because it had been clicked the most times. It was the most popular question because it sat at the top, and it sat at the top because it was the most popular question: a self-reinforcing loop. Ultimately these question lists just didn’t make sense.

Mig Reyes and Ann worked on a new help site that, in short, would be browsable, intuitive, and reflect the actions and activity of users. It also needed to serve as a starting point for people getting up to speed with Basecamp. From the support team’s perspective they needed a landing page for resources, a flexible tool that supported multiple ways of using a feature, and a software platform that made all of this work well.

The new site combined a new CMS along with new help guides targeted toward specific use cases. It’s a cool combination of standalone guides with answers to common questions.

In writing the new docs the support team sought to be more concise. They also aimed for each doc to tell you specifically what you could do with the feature. Each answer in their FAQ section answers the question briefly while also providing a deep link into the documentation.

To edit content on the support site the team had to create a local development environment. They installed Xcode, Homebrew, and git, generated an SSH key for GitHub, installed Ruby, rbenv, bundler, pow, and Jekyll, pulled the repo from GitHub, and then ran bundle install and rake setup. It was that “simple.” They use this setup to stage changes to docs as well. By using git branches they can prep content before a release. Those same branches also allow for experimentation with the documentation.
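To give a sense of what that setup looks like end to end, here’s a minimal sketch in Python that just shells out to the tools Ann listed. It’s illustrative only, not 37signals’ actual tooling: the repository URL, Ruby version, and rake task are placeholders, and it assumes Xcode, Homebrew, and git are already installed.

```python
# Illustrative only: automates the setup steps described above.
# The repository URL, Ruby version, and "rake setup" task are placeholders.
import subprocess

steps = [
    ["brew", "install", "rbenv", "ruby-build"],               # Ruby version manager
    ["rbenv", "install", "--skip-existing", "2.0.0-p0"],       # a Ruby for Jekyll to run on
    ["gem", "install", "bundler"],                             # gem dependency manager
    ["git", "clone", "git@github.com:example/help-site.git"],  # placeholder repo
]

for step in steps:
    subprocess.run(step, check=True)  # stop at the first step that fails

# Inside the checkout: install the gems (Jekyll included) and run project setup.
subprocess.run(["bundle", "install"], cwd="help-site", check=True)
subprocess.run(["bundle", "exec", "rake", "setup"], cwd="help-site", check=True)
```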

In the first 2 months the new support site saw 2,000 hits a day, and support tickets were down 5% compared with the previous system.

Write the Docs: Ashleigh Rentz – The technical challenges of serving docs at Google’s scale

I’m at Write the Docs today in Portland and will be posting notes from sessions throughout the day. These are all posted right after a talk finishes so they’re rough around the edges.

Ashleigh started at Google in 2004 as a data center hardware technician. In 2010 she got involved with a team of tech writers working on API documentation. The story she told was of how Google’s CMS came to be.

Google now has so many developer products it fills a periodic table. Literally. They made one.

Scaling problems can show up so gradually, you barely notice them until you’re already in big trouble. This happened for Google with their CMS. What worked in 2005 was horribly broken by 2010.

In 2005 Google hired Chris DiBona as its head of Open Source. He started by focusing on getting Google to contribute more to Open Source projects. They created code.google.com as a place to share code. When it launched it was an introductory place to put some code, starting with documentation for the 10 APIs Google had at the time. The site was built using EZT, or EaZy Templating, a simple markup language you can use to define build objects in your documentation.

Google’s code site was optimized for small files, around 256K, and cached things in memory. That design grew out of the hardware constraints Google was dealing with at the time, when a gigabyte of storage was still a lot.

In 2006 Google launched Project Hosting. In the days before GitHub this meant that they had a place to host and share open code projects.

By 2010 the builds for code.google.com started running into serious issues. New docs weren’t going live and they were hitting consistent errors. Files were taking almost 45 minutes to build. This meant that a tech writer working on a document had to give themselves a 45-minute lead time. A new project document set to launch at 2pm had to be filed at 1pm. Any typo or issue in the submitted doc meant another 45-minute delay. All of that was compounded by the fact that a typo in any new doc would fail the entire build, so one doc with an issue caused problems for new docs across all services.

There were other failures, too. Outside of writer mistakes they hit issues with disk I/O, which forced them to push the build cron jobs back to once every 2 hours. The fun part was that pulling any technical documentation down from the web also took 2 hours. Picture how awesome that is when you accidentally publish something. That 2-hour turnaround just didn’t work for how Google wanted to publish technical content.

They faced a choice between a band-aid fix and pushing the reset button on their CMS. They decided to develop a CMS that was actually meant for developer documentation. A team of people worked on this new site and the new CMS. The product of this was developers.google.com.

Google’s new developer site was built differently. Gone were the days of having to do everything manually. Since Google now had App Engine they were able to use it as the platform for building docs, with Django-nonrel letting them pair the Django framework with App Engine’s non-relational datastore.

By moving the CMS away from EZT they avoided relying upon a site-wide build. Now they could build only what the writer asks for, when the writer asks for it. Syntax errors now came back in 60 seconds, not 60 minutes. And your syntax errors don’t affect the system, just you. One downside to having no site-wide builds is that when changes happen outside the document tree (pricing, for example), Google has to manually rebuild the affected documents to reflect them.
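As a rough illustration of that on-demand model (not Google’s actual code; the Document model, template handling, and error reporting here are assumptions), a per-request renderer in a Django-style app might look like this:

```python
# A minimal sketch of per-document rendering: each request builds only the
# page the writer asked for, so a syntax error in one draft can't break
# anyone else's build. Model and field names are illustrative assumptions.
from django.core.cache import cache
from django.http import HttpResponse
from django.template import Context, Template, TemplateSyntaxError

from docs.models import Document  # hypothetical model with slug, source, updated_at


def render_doc(request, slug):
    """Render a single document on request instead of in a site-wide build."""
    doc = Document.objects.get(slug=slug)
    key = f"doc:{slug}:{doc.updated_at.isoformat()}"
    html = cache.get(key)
    if html is None:
        try:
            html = Template(doc.source).render(Context({"doc": doc}))
        except TemplateSyntaxError as exc:
            # Only this writer's page reports the error; every other
            # document keeps serving its cached copy.
            return HttpResponse(f"Template error in {slug}: {exc}", status=500)
        cache.set(key, html)
    return HttpResponse(html)
```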

In late 2011 they started the process of migrating over to the new site. With 80,000 documents that’s a slow process. The downside was that it split their code documentation across 2 sites, a short-term issue that would eventually be fixed. The goal was to complete the move by May 2012, and all went smoothly.

Write the Docs: James Socol – UX and IA at Mozilla Support, and Helping 7.2 Million More People

I’m at Write the Docs today in Portland and will be posting notes from sessions throughout the day. These are all posted right after a talk finishes so they’re rough around the edges.

James started things off after lunch. He works on support.mozilla.org and the Mozilla Developer Network.

James started with a short history of SUMO, which Michael talked a bit about yesterday. Through a series of redesigns they got to the design they have now, working through the problems of earlier iterations along the way.

When they first tried to solve this problem they lacked a good bit of data. What they did have, though, showed very low “helpful” scores on articles. They also saw high exits from searches, frequent re-searches, and high bounce rates.

One of the first things they did was dedicate someone to the web side of support. They started with a heuristic evaluation and worked with a user experience expert on improving things. One thing they discovered was that if people got to the right article the helpfulness scores were very high. Outside of that, though, the scores tanked. They knew they had an information architecture problem.

They set out to analyze the current information architecture of the site. The first step was to manually look through the docs. They looked at what articles they had, where those articles were linked from, and the taxonomy that existed. To help with this they ran a card sort, a technique where users group topics to generate a category tree.

With the map from the card sort they used Treejack and limited the user testing to displaying just the titles of docs. The task for users was to point at a title and say, “This is where I will find my answer.” With the architecture they had at the time, success rates were as low as 1%. That’s bad. But now they had data, something they could work with and optimize. What they found was interesting: some articles were missing, some were badly named, and some had other issues.
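The metric itself is simple. Here’s a small sketch, with invented task data rather than Mozilla’s, of how a tree-test success rate boils down to correct picks over attempts:

```python
# Invented example data; the titles and counts are not from Mozilla's study.
def success_rate(picks, correct_titles):
    """picks: titles participants chose; correct_titles: acceptable answers."""
    hits = sum(1 for title in picks if title in correct_titles)
    return hits / len(picks) if picks else 0.0


# 100 participants try the task "clear my browsing history"; only one finds
# the right article because it's badly named.
picks = ["Clearing your history"] + ["Bookmarks and tags"] * 99
print(f"{success_rate(picks, {'Clearing your history'}):.0%}")  # -> 1%
```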

Their user experience people had a few ideas. They proposed and tested a few solutions. This took their success rates in user tests up to highs of 92%. One task specifically went from a 1% success rate all the way up to 86%. With Treejack they were able to run all these tests by focusing just on the titles. It meant they could test quickly without having to rearrange or rewrite all of their docs.

At the end of things 10% more people were coming to the site and finding their answer. They tracked this by graphing the rate of “helpful” scores on documents. That 10% meant 7.25 million more people a year found the solution.

Write the Docs: Heidi Waterhouse – Search-first documentation: tags and keywords for frustrated users

I’m at Write the Docs today in Portland and will be posting notes from sessions throughout the day. These are all posted right after a talk finishes so they’re rough around the edges.

Heidi wrapped up talks just before lunch. She talked about search-first documentation and how search-first writing serves users. As she put it, lots of our users are coming to documentation angry. They have problems they need solved and can’t find the answers.

Heidi started in the mid-90s with table-of-contents-focused documents. These are wonderful in that they’re orderly, linear, searchable, and often indexed. But they are also rigid, linear, over-described, and leave users out of the process. They ignore the fact that a good document doesn’t just include everything in the world.

Mid-career she moved on to task-based documents. These are great in how they take into account the goals of users. While they’re more modular they can be too chunky and hard to discover. There’s no path through these documents. You can hop from one task to another but the overall picture and flow become difficult. Task-based documents are also rigid about the information type they require.

More recently Heidi’s seen guerrilla documentation appearing. This is largely user-created, relevant to real needs, and may surprise you. The downside is that the documents can get stale, they’re uncontrolled, and they require leaving the ecosystem of the product. The signal to noise ratio can also be hard to determine.

Heidi’s proposal is that we take the best aspects of each of these models and create a new one: search-first documentation. We’ll end up with something responsive to user needs, documentation that is self-triaging and born searchable. Ideally the terms used in this type of documentation come from your users. It’s not important what you call a feature; it’s important what users call it and how they’ll search for it. For example, “blue screen of death” appears nowhere in Microsoft’s documentation but we all know what it means.

To make this type of documentation happen you first need to gather data. Using tech support, user communities, and Stack Overflow you can get all the info you need. Second, you’ll have to write the docs and keep publishing all the time. Writing pithy docs will help you focus on responding to a specific question. Plenty of these questions won’t be answered by a simple task.
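As a toy illustration of the “born searchable” idea, you could tag each article with the terms users actually type, harvested from tickets, forums, and Stack Overflow, and route queries through that vocabulary. The article names and synonyms below are invented, not Heidi’s examples:

```python
# A minimal sketch: tag articles with the words users actually search for,
# not just the official feature names. All names here are made up.
SYNONYMS = {
    "stop error": ["blue screen of death", "bsod", "computer crashed blue"],
    "reset your password": ["can't log in", "forgot password", "locked out"],
}

# Invert the map so a raw query can be routed straight to an article.
INDEX = {term: article for article, terms in SYNONYMS.items() for term in terms}


def route(query):
    """Return the article a user's query should land on, or None."""
    query = query.lower().strip()
    if query in INDEX:                      # exact match on a known user term
        return INDEX[query]
    # Loose substring fallback for partial phrasings.
    return next((a for t, a in INDEX.items() if t in query or query in t), None)


print(route("BSOD"))  # -> "stop error"
```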

Write the Docs: Tim Daly – Literate Programming in the Large

I’m at Write the Docs today in Portland and will be posting notes from sessions throughout the day. These are all posted right after a talk finishes so they’re rough around the edges.

Tim’s talk required some previous knowledge of Donald Knuth. If you don’t know who that is, Wikipedia has a good summary. Tim’s background is largely with the Axiom algebra system.

Tim talked about how back in the 1970s you programmed in very small files. Nothing could be more than 4K so you ended up with these trees of tiny bits of code and relied upon build systems to put it all together.

IBM’s standard for documentation requires things to be written at the 8th grade level. This is understandably quite tough when you’re documenting complex algorithms.

Tim knows what code he wrote years ago does. He knows that if he takes it out things will break. The problem is he doesn’t know why he wrote it in the first place. This was tough when he faced the task of working with 1.2 million lines of uncommented code. The 8th-grade level documentation didn’t really help. In the early projects he worked on they didn’t write down the “Why” of code. Turns out that’s really, really important.

Tim sought a technology that would let him capture the “Why” of code. This, essentially, is literate programming, which stems from Donald Knuth, the author of TeX, METAFONT, and much more. A literate program should pass the Hawaii Test: take the program, print it in book form, give it to a programmer, and send that person to Hawaii for a couple of weeks. When they’re back they should be able to work on and modify the original code as well as the original programmer could. If you have that, you have a literate program.

The book form of a literate program includes all the necessary source code to build a system along with all the documentation and narration required to understand that system.

Tim argued that programming teams need an Editor in Chief. No one should be able to check in code without this EIC affirming that the code has an explanation along with it. The EIC gets between developers and the repository and says, “We’re writing a book about this code. You can’t check in code without the code and the story about what the code does matching.” When you have the explanation along with code you can compare a programmer’s stated goal with the reality of what the code does.
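One way to picture that gate (purely hypothetical; Tim didn’t describe an implementation, and the file naming convention is my assumption) is a pre-commit hook that refuses any change to a source file that doesn’t come with a matching narrative file:

```python
# Hypothetical pre-commit check in the spirit of the "Editor in Chief" role:
# every changed .py file must ship with a sibling .md file telling its story.
import subprocess
import sys
from pathlib import Path

# Files staged for this commit.
changed = subprocess.run(
    ["git", "diff", "--cached", "--name-only"],
    capture_output=True, text=True, check=True,
).stdout.split()

missing = [
    path for path in changed
    if path.endswith(".py") and not Path(path).with_suffix(".md").exists()
]

if missing:
    print("Commit rejected, no accompanying story for:", ", ".join(missing))
    sys.exit(1)
```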

Companies depend upon pieces of software that former employees created. If you don’t understand that code you end up rewriting it. By ensuring our programs are literate programs we make our code far more future-proof.

Write the Docs: James Tauber – Versioned Literate Programming for Tutorials

I’m at Write the Docs today in Portland and will be posting notes from sessions throughout the day. These are all posted right after a talk finishes so they’re rough around the edges.

James started things off after the break by talking about a combination of ideas around literate programming and version control. He pitched it as a Socratic talk that would pose more questions than answers.

James comes from a background of more than 20 years involvement in Open Source projects. He’s the CEO and founder of Eldarion which builds websites in Python and Django.

In June 2003 James posted to the Python mailing list about how feature structures could be implemented in Python. He worked up an example that was somewhat like narrative programming, a method in which you explain to humans what the code is doing while you are writing it.
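To give a flavor of that style (this is my own toy example, not the code from his post), here’s a feature-structure merge written with its explanation alongside it:

```python
# A toy feature-structure unifier, narrated as it goes. Not James's code.
def unify(a, b):
    """Merge two feature structures, failing (returning None) on conflict.

    A feature structure is just a mapping from feature names to either
    atomic values or further feature structures.
    """
    result = dict(a)
    for key, value in b.items():
        if key not in result:
            result[key] = value                 # b contributes a new feature
        elif isinstance(result[key], dict) and isinstance(value, dict):
            merged = unify(result[key], value)  # recurse into sub-structures
            if merged is None:
                return None
            result[key] = merged
        elif result[key] != value:
            return None                         # atomic clash: unification fails
    return result


print(unify({"agr": {"num": "sg"}}, {"agr": {"per": 3}}))
# {'agr': {'num': 'sg', 'per': 3}}
```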

Much of the talk went over my head so these notes aren’t the greatest. The gist seems to be that literate programming is not a means of redoing how we do documentation but, rather, a way we rethink programming. Writing the code and describing the code ought to be part of the same process.

Write the Docs: Noirin Plunkett – Text Lacks Empathy

I’m at Write the Docs today in Portland and will be posting notes from sessions throughout the day. These are all posted right after a talk finishes so they’re rough around the edges.

Noirin opened the second day at Write the Docs by stating that we are basically hairless monkeys. We’re inherently emotional people.

Ein + Fühlung: the German root of empathy. Our ability to communicate, and to do so with empathy, is what helps us create these social connections. Facial expressions, body language, and more help us cue these reactions and connections.

Text, though, can remove emotion from our communication. We lack the facial expressions and more subtle indicators that help us in person. We have a tendency to fill emotional voids with negative emotions. This is particularly true in high stress situations.

The rapidity with which we can compose digital text is not ideal if we’re trying to solve complex problems. What works for in-person conversation does not work as well in a text format.

A lot of the time we don’t really write. In email, or in documents we aren’t invested in, we essentially speak with our fingers. For people we have a connection with, that’s fine. When writing documentation we don’t have that same relationship with our audience. We don’t know their background, we don’t know why they came to the document, and we don’t always remember that communication is more than just transmission.

Learning social rules is an ongoing process. It’s exhausting and difficult for many. Noirin refers to it as running in emulation: like booting up a virtual machine to try to understand how something works in a different context.

Noirin also mentioned Brian Eno’s Oblique Strategies, a way to help with creative problem solving. When you’re stuck on a problem you can draw a card and apply it to the situation you’re facing. They’re not so much advice as a means to remind you how to think about problems.

Noirin discussed a few strategies for making our docs more emotionally engaging. First, we have to understand expectations. This applies to many aspects of our communication. The expectations our users have when reading documentation, when a boss reads our email, and more are important to how our text is received.

Most people assume their incoming communication has tact attached to it. We don’t assume communication is rude and abrasive. When it is, it surprises us. To solve this Noirin recommends we all attach a little tact to our output.

The next strategy Noirin covered is to remember that zero is not negative. We should try to recognize when we’re projecting negative emotions into a space that carries no emotion at all. If it’s unclear what the emotional context is, ask. That’s the only way you can be clear about the intent of a message.

If you transmitted a message and a different message was received, the onus is on you. You have to make sure your audience understands what they’re reading. Communication is a two-way medium, and if something is misunderstood it’s not entirely the reader’s fault. The reader is the only thing that matters with documentation. When in doubt we should rephrase. If you have to ask whether a sentence is grammatically correct, it doesn’t matter. Rewrite it.

The readers of your documentation don’t know how you feel. Our readers can’t see us, they can’t hear us, they don’t know if we’re having a good day or a bad day. Stating our emotions is a good way to get conversations back on track. If a conversation over text isn’t going well, state your emotions.

Noirin recommends moving through communication flow like this: email, IM or IRC, voice, video, real life. Those are in increasing order of fidelity. If email doesn’t work, move to IM. If that doesn’t work, move to voice. As she put it, “the fastest way to pass a Turing test is to pick up the phone.”

Perception is reality. If someone feels attacked, for example, they will shut down. That inherently makes their feeling reality. Reality is not what you’re trying to communicate, it’s what they’re feeling.

Noirin’s last point is that if it doesn’t matter, do it their way. Don’t be a stubborn fool just because you want it done your way.

Write the Docs: Teresa Talbot – Technically Communicating Internationally

I’m at Write the Docs today in Portland and will be posting notes from sessions throughout the day. These are all posted right after a talk finishes so they’re rough around the edges.

Teresa continued afternoon sessions by talking about the why and how of working abroad. She’s been a technical writer for about 20 years and spent 7 of those years working outside of the United States.

There’s a strong demand for technical writers outside of the US. This is largely because English is the most-spoken second language in the world. Lots of tech companies abroad wanted English-speaking technical communicators. Teresa has even worked for companies in the UK because they sought a US-specific technical translator.

The first route to working abroad is to have a company sponsor you. This is what allowed Teresa to work and live in Holland. While this gives you certain benefits, like state-run healthcare, it also subjects you more directly to the quirks of that country’s tax and employment laws.

Another route is to work as a contracting American for an international company. Teresa did this for a translation company working in Japan. Since Teresa was billing from a US social security number she didn’t need a work permit, which made things more convenient.

You can also start a company abroad. Teresa did this in Bulgaria and while she had a business license she never did get a residency permit.

Overall Teresa’s talk dove into lots of the nitty-gritty of working abroad. Not the best content for notes, but I noted what I could. 🙂

Write the Docs: Nisha George & Elaine Tsai – Translating Customer Interactions to Documentation

I’m at Write the Docs today in Portland and will be posting notes from sessions throughout the day. These are all posted right after a talk finishes so they’re rough around the edges.

Nisha and Elaine work at Twilio where a large portion of documentation is owned by the support team.

Why do people write to support? They’ll ask questions like “Can I do…” and “How do I…” as well as report things that are broken. A customer-focused company thinks of documentation as something that is helpful to customers as well as to the company. From the company perspective, docs provide a consistent answer that’s in line with what the company wants to convey. Documentation is, essentially, a way to maintain a healthy relationship with customers and manage expectations.

Support can own docs that complement engineering docs. You don’t need to spend a lot of time gathering topics for docs: every ticket that comes into support is an opportunity to document the answer. Documentation, then, can slow the growth in tickets your support team sees.

A primary goal for your documentation should be making answers easy to find. Having questions be phrased as the customers ask them is just one way to do this. The overall structure of a knowledge base is also important. It has to be logical. Creating those buckets for common questions and tasks with your application can help guide a user from one piece to the next. Finally, your docs have to be searchable. Not having that is a deal-breaker.

Your customer support team takes care of new features, products, and bugs while the documentation takes care of known issues, features, products, and workflows.

How will you know that you’ve created a successful documentation structure? First, customers will become doers. They’ll trust that they can become self-sufficient with your documentation.

Write the Docs: Kevin Hale – Getting Developers and Engineers to Write the Docs

I’m at Write the Docs today in Portland and will be posting notes from sessions throughout the day. These are all posted right after a talk finishes so they’re rough around the edges.

Kevin gave the next talk of the afternoon. Also, check out my notes from his talk at UserConf. He talked about Wufoo.

The ease of use Wufoo has means that, well, just about anyone can use it. Every person on their original team of 10 people wrote documentation. The secret, as Kevin puts it, was that everyone was working in customer support to some extent. They sent about 800 support emails a week to a user base of about 500,000.

Wufoo sought to create software that people had a relationship with. They were fanatical about creating meaningful relationships. They approached new users as if they were dating them and existing users as if they were married to them.

When it comes to new users, first impressions matter. The homepage, landing pages, plans/pricing, login, and support are the typical first impressions. Kevin prefers to focus on things like the first email, the login link, the first support interaction, and other specific pieces of the customer experience.

One way this looks in practice is how Chocolat lets you keep using all the features after the trial period ends. The only catch is it forces you to code in Comic Sans. Little touches like that matter.

Kevin also mentioned the site Little Big Details which collects lots of these kind of touches. WordPress is even in there. 🙂

Kevin mapped common marriage issues (money, kids, sex, time, others) to product issues (cost, users’ clients, performance, roadmap, others). As he put it, “divorce is like churn in a marriage.”

Kevin and the founders of Wufoo sought to create a support-driven development process. The way you make this work is simple. You just make everyone do customer support. The creators become the supporters and, thus, can’t ignore the things that cause users grief. This helped Wufoo scale their customer growth without causing an exponential hit on their support volume.

They learned a few lessons from this:

  • Support-responsible developers give the best customer support.
  • Contextual documentation is key. For example, clicking the “Help” tab takes you to the portion of docs for that feature (a minimal sketch of this mapping follows the list).
  • Engineers who do support run experiments. Wufoo did this by adding an emotional-state dropdown to their contact form.
  • Support-responsible developers actually create better software.
  • Support-responsible developers respect the people who do support full-time, every day. Their first full-time support person was revered because everyone understood what that job was like.
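Here’s the minimal sketch mentioned above: a made-up mapping from screens in an app to the docs that cover them, so a “Help” link can deep link rather than dump users on a generic landing page. The screen names and URLs are invented, not Wufoo’s.

```python
# Invented screen names and doc URLs, purely to illustrate contextual help.
HELP_LINKS = {
    "form_builder": "https://example.com/docs/building-forms",
    "rule_builder": "https://example.com/docs/rules",
    "reports": "https://example.com/docs/reports",
}

DEFAULT = "https://example.com/docs/"


def help_url(current_screen):
    """Return the docs page for the screen the user is currently on."""
    return HELP_LINKS.get(current_screen, DEFAULT)


print(help_url("rule_builder"))  # -> https://example.com/docs/rules
```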

They spent 30% of their time on internal tools. As Kevin put it, some of the best software they created was stuff that no user ever interacted with. It’s worth taking care of the stuff your employees work with every single day.

To prevent their user relationships from atrophying, Wufoo included a “Since you’ve been gone” view in their application. Each time a user logged in they’d see a timeline of what features had recently been added. To be included in this list, developers had to have finished the documentation. So if a developer wanted their feature in front of every user, they wrote docs.
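A tiny sketch of that gate (with an invented data model, not Wufoo’s) might look like this: a feature only makes the “Since you’ve been gone” timeline once its docs are marked complete.

```python
# Invented data model; illustrates gating the timeline on finished docs.
from dataclasses import dataclass
from datetime import date


@dataclass
class Feature:
    name: str
    shipped: date
    docs_complete: bool


def since_youve_been_gone(features, last_login):
    """Features shipped since the user's last login that have finished docs."""
    return [
        f for f in features
        if f.shipped > last_login and f.docs_complete  # no docs, no announcement
    ]


features = [
    Feature("Rule builder", date(2013, 4, 2), docs_complete=True),
    Feature("New report view", date(2013, 4, 5), docs_complete=False),
]
print([f.name for f in since_youve_been_gone(features, date(2013, 3, 1))])
# ['Rule builder']
```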

Kevin also posted his slides over on Speakerdeck.