Tag: Write the Docs

Session notes from Write the Docs, a conference in Portland, OR and Budapest, Hungary about crafting great documentation.

Write the Docs: Jessica Rose – Tone in Documentation

I’m at Write the Docs today in Budapest and will be posting notes from sessions throughout the day. These are all posted right after a talk finishes so they’re rough around the edges.

Jessica comes from a humanities background and about a year ago started teaching herself to program. She works for Majestic SEO and, when they found she was learning to program, immediately set her to work on the product’s documentation.

When she started out she wasn’t really sure what she was doing. Tone is about feelings, and Jessica didn’t really know how to add that into documentation. She found that you can’t seek out concrete rules for tone; it’s more of an art than a science. The most important thing to remember, though, is to use tone as a reinforcement of your brand’s voice; just don’t make it too sleazy.

Tone shows your users who you are. It broadcasts what you want users to feel about you. It can also communicate who your intended audience is and what your expectations are for how they will, or might, interact with your product. Finally, it indicates the level of interactivity you’re willing to maintain with your clients or users.

Part of talking effectively to your audience is accurately assuming who your readers are. Setting those expectations from the outset involves looking at skill level and whether readers are individuals or organizations, commercial or non-commercial. It’s easy to get this part wrong. Misjudging your audience can limit that audience through a tone pitched at the wrong level. You can also exclude potential users through culturally limited references. Overall, setting expectations is about asking what tone you want users to carry through your product.

Tone is also about the level of creativity you expect from your users. For example, are you inviting them to try and break something? Or do they need to conform to a level of professionalism or a specific niche?

We can also use tone to open up the vision or aspirations for our users. Tone can encourage users to set a broad, but not too broad, scope with vast creative potential. What gets messy is setting multiple levels of user expectations.

Before you set the tone for interaction you can run through a set of helpful questions. What kind of resources are you making available for user support? Are you building a community? What level of transparency are you aiming for? All of those things can help set the proper tone.

Jessica highlighted Buffer’s API documentation as a great example of tone. They set a very personal tone that helps direct users where to contact them. It’s lots of “you” and “we” throughout.

There are times, though, when you need to divorce your documentation from your brand voice: when the audiences for your documentation and your main product diverge, or when the branded tone suggests a level of support or interactivity you’re unable to sustain. You can also dilute your voice a bit in documentation; it doesn’t have to be an either/or situation.

Ultimately, when you’re writing you have to ask who your users are, what you expect them to be doing, and how much interaction you want to promise them.

Write the Docs: Jannis Leidel – Search and find. How we made MDN discoverable

Jannis is a developer working at Mozilla. He’s currently working on the Mozilla Developer Network, a site that covers the web platform, desktop app, Android, and Firefox OS efforts at Mozilla. There are 5.5 writers and 6 developers, as well as 14,000 community contributors, working on MDN and the site gets around 2 million unique visitors per month. With 900 live code demos and 33,000 wiki documents which have 375,000 edits in total there is a lot of content to deal with.

They now use kuma, a Django-based wiki that’s available on GitHub. In the past, though, MDN ran on DevEdge, an AOL creation for static page sites. After a couple of iterations they ended up writing their own documentation software; that project became kuma. In 2013 they redesigned the site with a responsive layout and content zones that place search front and center.

The emphasis on search, though, requires a powerful engine behind it. MDN has moved from a custom Google-based search to rolling their own implementation. It’s a full-text, multilingual search engine that provides faceting, filters, and pagination. The filters cover topics, skills, and document types. They can be dynamically changed based on what the Mozilla team sees in usage. Growth in certain areas allows them to emphasize different areas. It keeps the documentation responsive to the community’s demands and interest.

Each search page is also available as JSON. The users of MDN are developers themselves and this gives them the ability to use the data in formats other than MDN’s main site.
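
As a sketch of what that enables (the endpoint path and filter names here are assumptions for illustration, not MDN’s documented API), a developer could build search URLs for their own tooling like this:

```python
from urllib.parse import urlencode

# Hypothetical MDN-style JSON search endpoint -- illustrative only.
BASE = "https://developer.mozilla.org/en-US/search.json"

def search_url(query, **filters):
    """Build a search URL with optional facet filters (filter names are made up)."""
    params = {"q": query}
    params.update(filters)
    # Sort for a stable parameter order.
    return BASE + "?" + urlencode(sorted(params.items()))

print(search_url("addEventListener", topic="js"))
```

The JSON the endpoint returns could then be consumed however the developer likes, which is exactly the point of exposing search results as data rather than only as a web page.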

Another piece of MDN are the documentation status pages. This allows them to show the thousands of community editors and contributors what to work on first. It shows which pages need tags, editorial reviews, or technical reviews.

Firefox also ships with command-line access to MDN as part of their developer tools. From within the browser you can use the search API to pull up developer documentation. Users don’t have to leave to another site to find answers. In the future they want to extend this to plugins for popular code editors.

Write the Docs: Thomas Parisot – README Driven Development

Thomas started the sessions after lunch. He’s a JavaScript Engineer at the BBC. In 2011, after returning from a trip to Iceland, Thomas attended a conference in Paris where someone covered research illustrating how people learn more effectively from talking about something than reading about it. It prompted him to think about whether we should be talking about things more than relying upon static documentation.

He was working at a startup that grew from 1 to 5 people and took on more and more projects. They needed a better way to document what they were doing. As he put it, so much time is put into writing code that powers useful software. We want to share our work and findings. Documentation, in part, becomes how that end result is communicated to others.

At the startup they moved toward README-driven development for a few reasons. It’s the default file that GitHub displays, for one. A README is a simple and fast piece of content to write. It can also be written once and then transformed into any number of formats. The goal is to sum up what the software does in a few sentences.

At any point you can use any number of tools to edit a README alongside the versioned code. You’re not pushing people to a separate website to learn about the software. Through the inclusion of examples and effective content you can help people right within your source files.

With intro information contained in a README you can tell at a glance the limits of your software. Setting shallow limits for your README helps ensure users can understand your software quickly.
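
A skeleton in that spirit (the project name and commands are placeholders, not from the talk) might look like:

```markdown
# project-name

One or two sentences summing up what the software does, and for whom.

## Install

    npm install project-name

## Usage

    var project = require('project-name');
    project.doTheOneThing();

## Limits

What the project deliberately does not do, so readers can tell at a
glance whether it fits their problem.
```

Everything a newcomer needs to evaluate the project sits in one versioned file next to the code.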

Part of what’s key about README driven development is that it’s about ideas and new projects just as much as it is about existing work. You can lay the groundwork in your README before you write any code. It can create crystal clear guidelines for developers and allow you to gather feedback before shipping your code. By writing the README you are sketching your idea of what will be produced. Doing all that work ahead of time allows you to iterate faster.

This all helps in detecting complexity. As your README becomes more complicated it means you’re maybe trying to do too much within one project. It can be a helpful marker for when you need to break things into their respective pieces.

Focusing on the README can even help in pull requests, bug reports, and other aspects of development. Being detailed there focuses on the intention of your code rather than the code in isolation. That can prompt discussion about whether the intention is ideal regardless of the code attached.

Ultimately focusing on the README first gives users a single entry point to your project. It emphasizes the correct things, makes dialogue easier, and gets you to top-quality code faster.

Write the Docs: Markus Zapke-Gründemann – Writing multi-language documentation using Sphinx

Markus is an independent software developer and software trainer. Sphinx is a Python-based tool for generating documentation. It can output to a variety of formats including HTML, ePub, and plain text. His talk focused on the internationalization of text written in Sphinx.

Internationalization is the translation of text into other languages without changing the underlying code. It leaves the markup and HTML structure of the text unchanged.

How internationalization works in Sphinx.

Sphinx has an introductory page about how internationalization works. It walks through examples of using gettext as well as the general process.

sphinx-intl is an extension that makes the translation process much easier. It lessens the command-line nature of the task and generates binary files for your translations.
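
The Sphinx side of this is mostly configuration. A typical conf.py fragment looks something like the following (the `locale/` directory name is a common convention, not a requirement):

```python
# conf.py -- internationalization settings for a Sphinx project
locale_dirs = ["locale/"]   # where sphinx-intl keeps the .po/.mo catalogs
gettext_compact = False     # one catalog per source document, easier to review
language = "de"             # build the German translation
```

From there the usual loop is `make gettext` to extract translatable messages, `sphinx-intl update -p _build/gettext -l de` to refresh the catalogs, and then a normal build with `language` set.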

Transifex is a commercial solution for speeding up your translation process. It’s a modern webapp that makes collaborating on translations easier. They also have a free option for open source projects.

Write the Docs: Idan Gazit – Advanced Web Typography

Idan is a designer/developer at Heroku. He’s also a core designer for django. He started his talk clarifying the difference between typography and typefaces. Typefaces are just a subset of the broader typography field, which is the art and science of presenting textual information.

There’s micro (typeface, kerning) and macro (measure, leading, flow of type on a page) typography.

We’ve been designing books for centuries and there are all these built-in guidelines for typography in printed form. It’s all about control there. Since printed content is static the designer can control the consumption experience.

The web, though, presents us with an anarchic lack of control. We don’t even know where or on what screen someone is reading our words. And often the most constrained setting is exactly when the reader needs your text the most. As documentarians this is where you have to deliver; this is why you have to care about typography and the web.

Typography on the web is still in its infancy compared to printed type. Sometimes the new, cutting-edge tools are hard to use. But they do pay dividends when used properly.

Type size is one of the primary levers you have for controlling how text displays. Browsers set the default font size to 16px. Books, though, are set at a default of around 12px. The difference is that we hold books much closer when reading; if your content is going to end up on a screen, the larger size improves its legibility. When setting type size, proportional units, ems or percentages, are ideal. An em is a unit of measurement proportional to the typeface in use: a box roughly the size of an uppercase M, the largest character a given font will need to display. This lets you change your typeface later on without ruining the proportions of your design.

CSS3 introduces the rem unit, which is a root em and makes sizing easy. Everything goes back to the root so you’re not tracking relative sizes down the HTML stack.
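
A minimal sketch of the rem approach (the specific sizes here are just examples):

```css
/* Everything derives from the root size, so retuning one rule rescales the page. */
html  { font-size: 100%; }     /* 16px in most browsers by default */
body  { font-size: 1rem; }     /* equals the root size, regardless of nesting */
h1    { font-size: 2rem; }     /* 32px, with no tracking of parent sizes */
small { font-size: 0.875rem; } /* 14px */
```

With plain ems, an element nested inside another sized element compounds the scaling; rems sidestep that entirely.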

Browsers apply font families character by character. Fallback families are used when the primary choice doesn’t include a given character. Idan also dove into how font-face interacts with a browser’s display of text.

The dreaded FOUT can also affect our docs. Readers will see the default font applied just before a custom font resource loads. When the new font resource does kick in it also affects the layout of the page, because the browser adjusts the font metrics used. Adobe Blank is a tool that can help with this: it’s a font that covers every character with a blank glyph, which ensures no text displays until your custom font arrives. Additionally, there’s a spec in progress that will allow you to hook into font loading events.

Your journey toward really good type doesn’t end there, though. You can dive further down the rabbit hole into type rendering. Rasterizers turn outlines into pixels, transforming our fonts from smooth vectors to on-and-off pixels. Windows, for example, uses three different rasterizers, and there’s even a handy guide to figuring out which will be used under what conditions. Mac OS X, though, uses just one.

The bottom line, though, is to test your typeface. You can’t rely upon just one operating system or just one browser. Everything can impact how your text displays.

Idan also published his slides on Speaker Deck. You should check them out for more details (and for the beautiful design).

Write the Docs: Shwetank Dixit – Challenges and approaches taken with the Opera Extension Docs

Shwetank is a web evangelist at Opera. He’s worked there for 6 years or so now and is the main author of the Opera Extensions Docs.

A year ago Opera moved their platform to Chromium. Their extensions platform switched to accommodate the chrome.* APIs. They had to either re-document everything from the ground up or take Google’s existing docs and improve them. Both were difficult options. On the one hand it would be hard to re-create something. On the other it’d be difficult to improve upon an already great product: Google’s docs. Ultimately they took a close look at Google’s docs to see what they could improve.

What needed to be communicated in these docs was Opera’s architecture, extension APIs, and AddOns Store. The aim was to be easy to understand and to explain the most common use cases.

As an extensions developer himself, Shwetank could empathize with what the users of this documentation would need. He ran into many hurdles in trying to understand the platform and documented the solutions he found.

They set the scope of improvement upon Google’s Docs to be:

  • Unified architecture explanation
  • Easy to follow tutorials
  • Explaining common use cases
  • Simple sample extensions

The architecture needed to be explained on one page. The tutorials for essential APIs and functions needed to be accessible to new developers. Docs needed to cover a breadth of common uses, such as closing and opening tabs.

The core of this improvement process relied on making sample extensions for common use cases. By documenting the issues Shwetank came across he was then able to write an article which improved upon Google’s core docs.

On the technical side of things they built the site with Jekyll, using Markdown for the tutorials and plain HTML for the API docs. It’s available on GitHub.

Sample code must be as simple as possible with as few lines as possible. It’s a starting point to build from, not an exhaustive source of implementation. 20 pieces of sample code that each do one thing is better than one monolithic code example that does 20 things.
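
In the chrome.* extension APIs these docs cover, that philosophy can mean a sample is little more than a single call (the URL here is a placeholder):

```javascript
// One sample, one task: open a new tab.
chrome.tabs.create({ url: 'https://example.com' });
```

Closing a tab, querying tabs, and so on would each get their own equally small sample rather than being folded into one monolithic example.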

One thing that worked well for Opera after publishing the new docs was using multiple channels for feedback. They heard from people on Twitter, email, and Stack Overflow. Much of the feedback was about abstract scenarios.

Still to come in the docs are improved navigation, a boilerplate extension generator, translations, and adjustments to the tone.

Write the Docs: Kelly O’Brien – Engage or Die: Four Techniques for Writing Indispensable Docs

Kelly runs O’Brien Editorial, a technical writing business that works on documentation for small Drupal shops and large enterprises. She started as a journalism major in college and wrote about everything from food culture to trends in eco-construction materials. Through that experience she learned about how to target writing to your audience. In other words, how to engage your readers.

Engagement is about holding your readers’ attention but also means earning the trust of your readers. With documentation you want readers to rely upon your documentation for help. That starts with trust.

Readers must trust that you understand where they’re coming from. That you sympathize with their frustrations. That you know what they need to accomplish. That you’ll help them solve their problems.

If you don’t do those 4 things readers will ignore your docs. Kelly calls this Doc Death. There are 4 kinds.

The first is Death by Apathy. Think of readers as teenagers; if they detect that they have ceased to be the center of your universe they will tune you out. To fight this you need to put your readers first. Prevent apathy with empathy. The first step is to recognize that the things that are most important to your readers are not necessarily the same as the things that are most important to you. To learn what’s important you can ask what they care about, what they struggle with, and what they need from you.

Death by Alienation is the second form of Doc Death. Readers are a sensitive bunch. If they ever feel that you’re not on their side they will put your docs down and not touch them again. Your voice is one of your most powerful tools to combat this. Use it wisely. Tiny adjustments can make a huge difference to your readers. In your docs, tone of voice is about formality. It’s a spectrum between academic and relaxed; the goal is to find a place somewhere in the middle. To do that you should take 3 things into account: the company culture, the purpose of your docs, and the tech savviness of your readers. Whatever tone of voice you choose, be deliberate about it and employ it consistently.

The third type of Doc Death is Death by Impatience. The last thing your readers want to do is hunt for answers. If it takes your readers too long to find what they need, your docs will be ignored. Organizing your content helps fight this. You should lead with the problem that the document solves; it makes clear to your readers what they’ll learn by reading. It helps if you ask, “If I read this, what’s in it for me?” The secret to reader engagement is WIIFM: What’s In It For Me. You can make this clear to your readers by simply telling them. This should happen later in the writing process. The first step is to get everything on the page; then you can go back and make the WIIFM clear.

The last form of Doc Death is Death by Disorientation. Readers have relatively short memories. If your reader ever wonders “Why are we talking about this?” your docs are in trouble. The solution is to use powerful pointers. Pointer sentences recap, state, or foreshadow the information you’re presenting, helping your readers orient themselves. They create a contextual structure for what readers are looking at. They also allow you to show that your readers’ needs come first. They remind your readers that they’re not alone. Finally, they communicate the WIIFM to your readers.

Write the Docs: Adam DuVander – Docs as Marketing: Make Your API Irresistible

I’m at Write the Docs today in Portland and will be posting notes from sessions throughout the day. These are all posted right after a talk finishes so they’re rough around the edges.

There are close to 9,000 APIs in the ProgrammableWeb directory. Just a while ago this was only a couple thousand. Even companies like Sears, USA Today and others have APIs.

Adam’s most popular tweet from last year was, “All your API documentation should get the same treatment and design as your marketing materials.” That was a quote from John Sheehan.

Adam’s talk was about the Three Cs: clarity, cost, and community.

By clarity of API docs Adam means that developers can actually find their way to the documentation portal. Not all companies make their API docs even remotely accessible. Clarity also means having a complete API reference available in a form that lets developers explore. A clear API also requires sample code, ideally in a couple of popular languages and ready to be plugged right into a project. Decreasing the time to “Hello world” is key.

The cost of an API is also relevant. Developers need to know how much (if any) it costs to use a service and its API. Rate limits are another type of cost that needs to be clearly communicated. Google does this very well with their developer console. You can see usage and request higher limits right inline. Developers also need to find any API-specific terms of service.

For the community of an API Adam talked about needing a place where developers can discuss the API. A forum is a good start but a strong presence where your developers already are (e.g. StackOverflow) is better. Highlighting current work based on the API and focusing on changes coming to that API is another great way to build community. You also have to put a face to your API. Developers need to know who within the company they are or should be talking to.

Write the Docs: Ann Goliak – Helping the help pages

Ann works at 37signals on their support team. She talked about the launch of their new customer-facing support documentation.

The help pages were designed to be self-service. The old layout organized things as a list of questions. There wasn’t any sense of hierarchy or importance to things. The first question listed was, “What are the ‘Your global feeds’ on the Dashboard?” So why was that the first question? Well, because it’d been clicked on the most times. It’s the most popular question because it’s the most popular question. Ultimately these question lists just didn’t make sense.

Mig Reyes and Ann worked on a new help site that, in short, would be browsable, intuitive, and reflect the actions and activity of users. It also needed to serve as a starting point for people to get up to speed with Basecamp. From the support team perspective they needed a landing page for resources, a flexible tool that provided for multiple ways of using a feature, and they needed a software platform that allowed for all of this to work well.

The new site combined a new CMS along with new help guides targeted toward specific use cases. It’s a cool combination of standalone guides with answers to common questions.

In writing the new docs the support team sought to be more concise in their writing. They also aimed for each doc to tell you specifically what you could do with the feature. Each answer in their FAQ section answers the question briefly while also providing a deep link in to documentation.

To edit content on the support site the team had to create a local development environment: install Xcode, Homebrew, git, generate an SSH key for GitHub, install Ruby, rbenv, bundler, pow, and Jekyll, pull the repo from GitHub, and then run bundle install and rake setup. It was that “simple.” They use this setup to stage changes to docs as well. By using git’s branching they can prep content before a release. Those same branches also allow for experimentation with the documentation.

In the first 2 months their new support site saw 2,000 hits a day and support tickets were down 5% compared with the previous system.

Write the Docs: Ashleigh Rentz – The technical challenges of serving docs at Google’s scale

Ashleigh started at Google in 2004 as a data center hardware technician. In 2010 she got involved with a team of tech writers working on API documentation. The story she told was of how Google’s CMS came to be.

Google now has so many developer products it fills a periodic table. Literally. They made one.

Scaling problems can show up so gradually, you barely notice them until you’re already in big trouble. This happened for Google with their CMS. What worked in 2005 was horribly broken by 2010.

In 2005 Google had just hired Chris DiBona as the head of Open Source at Google. He started by focusing on getting Google to contribute more to open source projects. They created code.google.com as a place for them to share code. When they launched, it was an introductory place to put some code, starting with documentation for their 10 APIs at the time. It was built using EZT, or EaZy Templating, a simple markup language you can use to define build objects in your documentation.

Google’s code site was optimized for small files, about 256K, and cached things in memory. This design grew out of Google’s hardware-scaling constraints at the time, when a gigabyte of storage was still a lot.

In 2006 Google launched Project Hosting. In the days before GitHub this meant that they had a place to host and share open code projects.

By 2010 the builds for code.google.com started running into serious issues. New docs weren’t going live and they were hitting consistent errors. Files were taking almost 45 minutes to build, which meant a tech writer working on a document had to give themselves a 45-minute lead time: a new project document set to launch at 2pm had to be filed at 1pm, and any typo or issue in the submitted doc meant another 45-minute delay. Worse, a typo in any one new doc could fail the whole build, causing problems for new docs across all services.

There were other failures, too. Outside of writer mistakes they hit issues with disk I/O, which forced them to push the build cron jobs back to once every 2 hours. The fun part of that was that pulling any technical documentation down from the web also took 2 hours. Picture how awesome that is when you accidentally publish something. This 2-hour turnaround time just didn’t work for how Google wanted to publish technical content.

They faced a choice between a band-aid fix and pushing the reset button on their CMS. They decided to develop a CMS that was actually meant for developer documentation. A team of people worked on this new site and the new CMS. The product of this was developers.google.com.

Google’s new developer site was built differently. Gone were the days of having to do everything manually. Since Google now had App Engine they were able to leverage it as the platform on which to build docs, using Django-nonrel to pair the Django framework with App Engine’s non-relational datastore.

By moving the CMS away from EZT they avoided relying upon a site-wide build. Now they could build only what the writer asks for, when the writer asks for it. Syntax errors now returned in 60 seconds, not 60 minutes. And, your syntax errors don’t affect the system, just you. One downside to no site-wide builds is that when changes (for example, with pricing) happen outside the document tree Google has to manually rebuild the document to reflect the new pricing structure.
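
As a toy illustration of that shift (a sketch of the idea, not Google’s code), a per-document build amounts to a content-keyed cache: render only the page that was asked for, and only when its source changed.

```python
# Rebuild a single document on demand, and only when its source changed.
_cache = {}  # path -> (source_text, rendered_html)

def render(source_text):
    """Stand-in for a real template/markup renderer."""
    return "<p>" + source_text + "</p>"

def build_doc(path, source_text):
    cached = _cache.get(path)
    if cached and cached[0] == source_text:
        return cached[1]            # cache hit: nothing to rebuild
    html = render(source_text)      # only this document is built
    _cache[path] = (source_text, html)
    return html
```

A syntax error now fails only the offending document’s build, but, as the talk noted, a change that lives outside the document tree (like pricing) won’t invalidate the cache by itself.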

In late 2011 they started the process of migrating over to the new site. With 80,000 documents that’s a slow process, and in the meantime it split their code documentation across 2 sites. It was a short-term issue, though: the goal was to complete the move by May 2012, and all went smoothly.