
Write the Docs: Ann Goliak – Helping the help pages

I’m at Write the Docs today in Portland and will be posting notes from sessions throughout the day. These are all posted right after a talk finishes so they’re rough around the edges.

Ann works at 37signals on their support team. She talked about the launch of their new customer-facing support documentation.

The help pages were designed to be self-service. The old layout organized things as a list of questions. There wasn’t any sense of hierarchy or importance to things. The first question listed was, “What are the ‘Your global feeds’ on the Dashboard?” So why was that the first question? Well, because it’d been clicked on the most times. It’s the most popular question because it’s the most popular question. Ultimately these question lists just didn’t make sense.

Mig Reyes and Ann worked on a new help site that, in short, would be browsable, intuitive, and reflect the actions and activity of users. It also needed to serve as a starting point for people to get up to speed with Basecamp. From the support team’s perspective they needed a landing page for resources, a flexible tool that accounted for the multiple ways of using a feature, and a software platform that allowed all of this to work well.

The new site combined a new CMS with new help guides targeted toward specific use cases. It’s a cool combination of standalone guides and answers to common questions.

In writing the new docs the support team sought to be more concise. They also aimed for each doc to tell you specifically what you could do with the feature. Each answer in their FAQ section answers the question briefly while also providing a deep link into the documentation.

To edit content on the support site the team had to create a local development environment. They installed Xcode, Homebrew, git, generated an SSH key for GitHub, installed Ruby, rbenv, bundler, pow, and Jekyll, pulled the repo from GitHub, and then ran bundle install and rake setup. It was that “simple.” They use this setup to stage changes to docs as well. By using git branches they can prep content before a release. Those same branches also allow for experimentation with the documentation.

In the first 2 months their support site saw 2,000 hits a day and support tickets were down 5% from the previous system.

Write the Docs: Ashleigh Rentz – The technical challenges of serving docs at Google’s scale

I’m at Write the Docs today in Portland and will be posting notes from sessions throughout the day. These are all posted right after a talk finishes so they’re rough around the edges.

Ashleigh started at Google in 2004 as a data center hardware technician. In 2010 she got involved with a team of tech writers working on API documentation. The story she told was of how Google’s CMS came to be.

Google now has so many developer products it fills a periodic table. Literally. They made one.

Scaling problems can show up so gradually that you barely notice them until you’re already in big trouble. This happened to Google with their CMS. What worked in 2005 was horribly broken by 2010.

In 2005 Google had just hired Chris DiBona as the head of Open Source at Google. He started by focusing on getting Google to contribute more to Open Source projects. They created code.google.com as an introductory place to put and share some code, starting with documentation around the 10 APIs Google had at the time. It was built using EZT, or EaZy Templating, a simple markup language you can use to define build objects in your documentation.

Google’s code site was optimized for small files, about 256K, and cached things in memory. That design grew out of the hardware constraints Google was dealing with at the time, when a gigabyte of storage was still a lot.

In 2006 Google launched Project Hosting. In the days before GitHub this meant that they had a place to host and share open code projects.

By 2010 the builds for code.google.com started running into serious issues. New docs weren’t going live and they were hitting consistent errors. Files were taking almost 45 minutes to build. This meant that a tech writer working on a document had to give themselves a 45-minute lead time. A new project document set to launch at 2pm had to be filed at 1pm. Any typo or issue in the submitted doc meant another 45-minute delay. All of that was compounded by the fact that a typo in any new doc would fail the entire build. One doc with an issue caused problems for new docs across all services.

There were other failures, too. Outside of writer mistakes they hit issues with disk I/O. This caused them to push the build cron jobs back to once every 2 hours. The fun part of that was that pulling any technical documentation down from the web also took 2 hours. Picture how awesome that is when you accidentally publish something. This 2-hour turnaround time just didn’t work for how Google wanted to publish technical content.

They faced a choice between a band-aid fix and pushing the reset button on their CMS. They decided to develop a CMS that was actually meant for developer documentation. A team of people worked on this new site and the new CMS. The product of this was developers.google.com.

Google’s new developer site was built differently. Gone were the days of having to do everything manually. Since Google now had App Engine they were able to leverage it as the platform on which to build docs. They used Django nonrel so they could pair the Django framework with App Engine’s non-relational datastore.

By moving the CMS away from EZT they avoided relying upon a site-wide build. Now they could build only what the writer asks for, when the writer asks for it. Syntax errors now returned in 60 seconds, not 60 minutes. And your syntax errors don’t affect the system, just you. One downside to dropping site-wide builds is that when changes happen outside the document tree (with pricing, for example) Google has to manually rebuild the affected documents to reflect the new pricing structure.
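To make the contrast concrete, here’s a minimal sketch of the two build models. Everything in it is hypothetical (the docs, the toy validator, and the `build_site`/`build_one` functions are my inventions); it illustrates the idea, not Google’s actual CMS code:

```python
# Hypothetical sketch of site-wide vs. per-document builds.
# Not Google's actual CMS code -- just the shape of the problem.

docs = {
    "maps/start": "<h1>Maps</h1>",
    "ads/start": "<h1>Ads</h1",  # one malformed doc
}

def validate_and_render(source):
    # Stand-in for real templating: just check the markup is balanced.
    if source.count("<") != source.count(">"):
        raise ValueError("markup error")
    return source

def build_site():
    """Old model: one build for everything. One bad doc fails it all,
    and every writer waits on the full 45-minute run."""
    return {path: validate_and_render(src) for path, src in docs.items()}

def build_one(path):
    """New model: build only what the writer asks for, when asked.
    An error comes back in seconds and affects only this document."""
    return validate_and_render(docs[path])

print(build_one("maps/start"))  # fine, unaffected by the broken doc
try:
    build_site()                # the single bad doc fails the whole build
except ValueError as err:
    print("site build failed:", err)
```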

In late 2011 they started migrating over to the new site. With 80,000 documents that’s a slow process. The problem was that it split their code documentation across 2 sites, a short-term issue that would eventually be fixed. The goal was to complete the move by May 2012 and all went smoothly.

Write the Docs: James Socol – UX and IA at Mozilla Support, and Helping 7.2 Million More People

I’m at Write the Docs today in Portland and will be posting notes from sessions throughout the day. These are all posted right after a talk finishes so they’re rough around the edges.

James started things off after lunch. He works on support.mozilla.org and the Mozilla Developer Network.

James started with a short history of SUMO, which Michael talked a bit about yesterday. Through a series of redesigns they arrived at the design they have now. In that process they worked through the problems of earlier iterations.

When they set out to solve these problems they lacked a good bit of data. What they did have, though, showed very low “helpful” scores on articles. They also saw high exits from searches, frequent re-searches, and high bounce rates.

One of the first things they did was dedicate someone to the web side of support. They started with a heuristic evaluation and worked with a user experience expert on improving things. One thing they discovered was that if people got to the right article the helpfulness scores were very high. Outside of that, though, the scores tanked. They knew they had an information architecture problem.

They set out to analyze the current information architecture of the site. The first step was to manually look through the docs. They looked at what articles they had, where they were linked from, and the taxonomy that existed. To help with this they ran a card sort, a method for guiding users through generating a category tree.

With the map they had from the card sort they used Treejack, limiting the user testing to displaying just the titles of docs. The goal for users was then to say, “This is where I will find my answer.” With their architecture at the time the success rates were as low as 1%. That’s bad. With that, though, they now had data. They had something they could work with and could optimize. What they found was interesting. Some articles were missing, some were badly named, and some had other issues.
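As a rough illustration of how a title-only tree test gets scored (my sketch, with invented tasks and participant answers, not Treejack’s actual implementation):

```python
# Minimal sketch of scoring a title-only tree test, in the spirit of
# Treejack. The tasks and participant answers are invented.

tasks = {
    "My browser keeps crashing": "Firefox crashes",  # correct title
    "How do I block ads?": "Install add-ons",
}

# Each participant picks the title where they expect to find the answer.
participants = [
    {"My browser keeps crashing": "Firefox crashes",
     "How do I block ads?": "Change your theme"},
    {"My browser keeps crashing": "Websites look wrong",
     "How do I block ads?": "Install add-ons"},
]

for question, correct_title in tasks.items():
    hits = sum(p[question] == correct_title for p in participants)
    print(f"{question!r}: {hits / len(participants):.0%} success")
```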

Their user experience people had a few ideas, which they proposed and tested. This took their success rates in user tests up to highs of 92%. One task specifically went from a 1% success rate all the way up to 86%. With Treejack they were able to run all these tests by focusing just on the titles. It meant they could test quickly without having to rearrange or rewrite all of their docs.

At the end of things 10% more people were coming to the site and finding their answer. They tracked this by graphing the rate of “helpful” scores on documents. That 10% meant 7.25 million more people a year found the solution.

Write the Docs: Heidi Waterhouse – Search-first documentation: tags and keywords for frustrated users

I’m at Write the Docs today in Portland and will be posting notes from sessions throughout the day. These are all posted right after a talk finishes so they’re rough around the edges.

Heidi wrapped up talks just before lunch. She talked about search-first documentation and how search-first writing serves users. As she put it, lots of our users are coming to documentation angry. They have problems they need solved and can’t find the answers.

Heidi started in the mid-90s with table-of-contents-focused documents. These are wonderful in that they’re orderly, linear, searchable, and often indexed. But they are also rigid, linear, over-described, and leave users out of the process. They ignore the fact that a good document doesn’t just include everything in the world.

Mid-career she moved on to task-based documents. These are great in how they take into account the goals of users. While they’re more modular they can be too chunky and hard to discover. There’s no path through these documents. You can hop from one task to another but the overall picture and flow become difficult. Task-based documents are also rigid about the information type they require.

More recently Heidi’s seen guerrilla documentation appearing. This is largely user-created, relevant to real needs, and may surprise you. The downside is that the documents can get stale, they’re uncontrolled, and they require leaving the ecosystem of the product. The signal to noise ratio can also be hard to determine.

Heidi’s proposal is that we take the best aspects of each of these models and create a new one: search-first documentation. We’ll end up with something responsive to user needs. It will be documentation that is self-triaging and born searchable. Ideally the terms used in this type of documentation come from your users. It’s not important what you call a feature; it’s important what users call it and how they’ll search for it. For example, “blue screen of death” appears nowhere in Microsoft’s documentation but we all know what it means.

To make this type of documentation happen you first need to gather data. Using tech support, user communities, and Stack Overflow, you can get all the info you need. Second, you’ll have to write the docs and keep publishing all the time. Writing pithy docs will help you focus on responding to a specific question. Plenty of these questions won’t be answered by a simple task.
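As a small sketch of that data-gathering step (my illustration; the ticket text is invented), you could tally the words users actually type and treat the most frequent ones as candidate tags and keywords:

```python
# A rough sketch of mining user language from support tickets to seed
# search-first tags and keywords. The ticket text is invented.
from collections import Counter

tickets = [
    "got a blue screen of death after the update",
    "blue screen when I plug in my monitor",
    "installer crashes and shows a blue screen",
]

stopwords = {"a", "the", "of", "and", "after", "when", "i",
             "in", "my", "got", "shows"}
terms = Counter(
    word
    for ticket in tickets
    for word in ticket.lower().split()
    if word not in stopwords
)

# "blue" and "screen" float to the top: that's the users' name for
# the problem, whatever the official documentation calls it.
print(terms.most_common(5))
```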

Write the Docs: Jennifer Hartnett-Henderson – Sketchnotes: Communicate Complex Ideas Quickly

I’m at Write the Docs today in Portland and will be posting notes from sessions throughout the day. These are all posted right after a talk finishes so they’re rough around the edges.

Sketchnotes are a great way to communicate complex ideas very quickly. Mike Rohde defines sketchnotes as, “rich visual notes created from a mix of handwriting, drawings, hand-drawn typography, shapes and visual elements like arrows, boxes and lines.”

Jennifer’s been able to combine her art interests with her working career by perfecting how she does sketchnotes.

Sketchnotes work due to dual coding. If you combine the visual with the written it increases people’s ability to remember information. It’s important, though, not to think of sketchnotes as art. They’re not art; they’re a means of communication. They’re just notes. Sketchnotes are about combining shapes and lines into a form that makes sense.

There are a few resources to help get up to speed with sketchnotes. Mike Rohde sells The Sketchnote Handbook. Eva-Lotta Lamm also publishes sketchnotes from conferences all over. The tools you use aren’t as important as the practice. You can use digital or paper; it’s about what works best for you.

Write the Docs: Tim Daly – Literate Programming in the Large

I’m at Write the Docs today in Portland and will be posting notes from sessions throughout the day. These are all posted right after a talk finishes so they’re rough around the edges.

Tim’s talk required some previous knowledge of Donald Knuth. If you don’t know who that is Wikipedia has a good summary. Tim’s background is largely with the Axiom algebra system.

Tim talked about how back in the 1970s you programmed in very small files. Nothing could be more than 4K so you ended up with these trees of tiny bits of code and relied upon build systems to put it all together.

IBM’s standard for documentation requires things to be written at the 8th grade level. This is understandably quite tough when you’re documenting complex algorithms.

Tim knows what code he wrote years ago does. He knows that if he takes it out things will break. The problem is he doesn’t know why he wrote it in the first place. This was tough when he faced the task of working with 1.2 million lines of uncommented code. The 8th-grade level documentation didn’t really help. In the early projects he worked on they didn’t write down the “Why” of code. Turns out that’s really, really important.

Tim sought a technology that would let him capture the “Why” of code. This, essentially, is literate programming and stems from Donald Knuth, the writer of TeX, METAFONT, and many more pieces of code. A literate program should pass the Hawaii Test: take the program, print it in book form, give it to a programmer for a couple weeks, and send that person to Hawaii. When they’re back they should be able to work on and modify the original code as well as the original programmer could. If you have that, you have a literate program.

The book form of a literate program includes all the necessary source code to build a system along with all the documentation and narration required to understand that system.
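As a toy sketch of what carrying the “Why” alongside the code can look like (my example, not from the talk or from Axiom; the function and its backstory are invented):

```python
# A toy sketch of literate style: the "why" travels with the code it
# explains, so a reader fresh off the plane from Hawaii can still
# modify it safely. This example is invented for illustration.

def normalize_counts(counts):
    """Scale raw counts so they sum to 1.0.

    Why this exists: downstream reports compare categories as
    percentages, while upstream collectors emit raw counts whose
    totals vary from day to day. Remove this step and every report
    still runs -- the comparisons are just silently wrong.
    """
    total = sum(counts)
    if total == 0:
        # Why: an all-zero day is legitimate input (think outages),
        # and dividing by zero here once broke the whole pipeline.
        return [0.0] * len(counts)
    return [count / total for count in counts]

print(normalize_counts([2, 2, 4]))  # [0.25, 0.25, 0.5]
```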

Tim argued that programming teams need an Editor in Chief. No one should be able to check in code without this EIC affirming that the code has an explanation along with it. The EIC gets between developers and the repository and says, “We’re writing a book about this code. You can’t check in code without the code and the story about what the code does matching.” When you have the explanation along with code you can compare a programmer’s stated goal with the reality of what the code does.

Companies depend upon pieces of software that former employees created. If you don’t understand that code you end up rewriting it. By ensuring our programs are literate programs we produce stronger, more future-proof code.

Write the Docs: James Tauber – Versioned Literate Programming for Tutorials

I’m at Write the Docs today in Portland and will be posting notes from sessions throughout the day. These are all posted right after a talk finishes so they’re rough around the edges.

James started things off after the break talking about a combination of ideas around literate programming and version control. He pitched it as a Socratic talk that would pose more questions than answers.

James comes from a background of more than 20 years involvement in Open Source projects. He’s the CEO and founder of Eldarion which builds websites in Python and Django.

In June 2003 James posted to the Python mailing list about how feature structures could be implemented in Python. He worked up an example that was something like narrative programming, a method in which you explain to humans what the code is doing while you are writing it.
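For flavor, here’s a toy sketch of what narrating a feature-structure implementation might look like (my invention; I haven’t seen the original post):

```python
# A toy flavor of narrative programming -- my invention, not James's
# 2003 post. Feature structures are nested attribute-value maps, and
# the narration explains the code to humans as it's written.

def unify(a, b):
    # Start with everything the first structure asserts...
    result = dict(a)
    # ...then fold in the second structure, one feature at a time.
    for feature, value in b.items():
        if feature not in result:
            result[feature] = value
        elif isinstance(result[feature], dict) and isinstance(value, dict):
            # Two nested structures can themselves be unified.
            result[feature] = unify(result[feature], value)
            if result[feature] is None:
                return None
        elif result[feature] != value:
            # A genuine conflict: unification fails outright.
            return None
    return result

print(unify({"num": "sg"}, {"person": 3}))  # {'num': 'sg', 'person': 3}
print(unify({"num": "sg"}, {"num": "pl"}))  # None
```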

Much of the talk went over my head so these notes aren’t the greatest. The gist seems to be that literate programming is not a means of redoing how we do documentation but, rather, a way we rethink programming. Writing the code and describing the code ought to be part of the same process.

Write the Docs: Daniya Kamran – Translating Science into Poetry

I’m at Write the Docs today in Portland and will be posting notes from sessions throughout the day. These are all posted right after a talk finishes so they’re rough around the edges.

Daniya started by stating that she is not strictly a documentarian. She’s a translator and interventionist. She turns crazy black-hole-style concepts into simple solutions. She focuses on questions like, “How do we turn around how people view nutrition?”

Documentation is an intervention. You’re creating an intervention from the perspective of the user. It changes the way they relate to a product and their train of thought when using the product.

The techniques and processes she presented are not a checklist. They’re strategies you can employ selectively to reach your goal. Her goal is to give you a poetic lens for your work. If you don’t like poetry or scientific documents, or you just want her to show you how to write the docs, then you’re not going to like this.

Poetry is very good at immortality. When you write you cannot write thinking that what you’re writing is never going to be seen again. If we assume documents are temporary it will come across in our writing. The user can tell. Poets write to transcend time. As programmers and as documentarians we don’t do this. Even if no one understands your work they should at least know that you wrote it. Write thinking that you’re a badass. Here’s Daniya’s guideline: Good writing will get replaced. Bad writing will get replaced immediately. Epic writing will be edited. Be epic.

Poetry is always about a dilemma. There’s something about to happen that you can’t quite figure out and need to solve. We assume that people know why they are reading the docs. There needs to be a context for what is being created. Docs, ultimately, answer the question, “What do I do?” In many, many docs this is not obvious. Your worst enemy is not a competing document but a lack of initiative. Someone is going to read what you wrote and nothing will happen. In making dissonance obvious we create a sense of urgency, increase user autonomy, and provide a call to action.

Daniya says we should be biased. There is such a thing as a point of view. Scientists are very allergic to a point of view; they don’t like it at all. We associate objectivity with intelligence, even though that’s not always the case. The person reading your docs is coming to you because you’re the expert. They don’t want to do the thinking that you did. They want the answer. There’s a reason opinions flow and have an impact. A lot of it just has to do with assuming you are the expert and writing accordingly. Influence your readers. The goal is to be holistic. Encase your opinion in objective analyses as well as other opinions. Having the point of view provides the reader context. They can understand that you’re human.

Another thing scientists are allergic to is error. Poetry deals with it in a fascinating way. In some ways poetry is all about error; bad things happen all the time. As Daniya said, “Why is epic poetry still epic when everything is going wrong?” Poetry deals with things as part of a cyclical process. When you view things as part of a process it removes a large aspect of the negativity. Daniya also phrased it as, “a lack of error is very contrary to human ability.” An error is not a consequence; it is an amendment to the process and a part of the process. It’s temporary and provisional and will, eventually, be edited and improved.

Poetry is very, very good at reiteration. What reiteration allows is for us to remember the purpose after every major turning point and complexity. People should not forget why they are reading what they are reading. Reiteration allows them to connect the current complexity back to the original purpose. Periodically bringing back context allows a reader to never lose sight of what you’re trying to do.

Metaphors are, in some ways, the most important point. All the evidence you’ll be drawing from to write your docs already exists. We merely rearrange bits of information in new and interesting ways. All we are doing is creating metaphors. They allow us to rearrange the patterns of our mind. We’re making remote associations between things we didn’t think were related at all. The reader fills in these patterns and associations and, thus, connects more deeply to your writing. Instead of pushing something out of the page the reader is pulling it out of themselves.

As documentarians we are adding to how people view communication. The way you ensure that your documentation is eternal is to make sure as many people as possible can read it, leverage it, and connect with it. The bottom line in all this is elegance. Can you make your documents elegant? If you can capture this in your words and into your page then you have done everything you can.

Write the Docs: Noirin Plunkett – Text Lacks Empathy

I’m at Write the Docs today in Portland and will be posting notes from sessions throughout the day. These are all posted right after a talk finishes so they’re rough around the edges.

Noirin opened the second day at Write the Docs by stating that we are basically hairless monkeys. We’re inherently emotional people.

Ein + fühlung: The German root of empathy. Our ability to communicate, and to do so with empathy, is what helps us create these social connections. Facial expressions, body language, and more help us cue these reactions and connections.

Text, though, can remove emotion from our communication. We lack the facial expressions and more subtle indicators that help us in person. We have a tendency to fill emotional voids with negative emotions. This is particularly true in high stress situations.

The rapidity with which we can compose digital text is not ideal if we’re trying to solve complex problems. What works for in-person conversation does not work as well in a text format.

A lot of the time we don’t really write. In email, or in documents we aren’t invested in, we essentially speak with our fingers. For people we have a connection with, that’s fine. When writing documentation we don’t have that same relationship with our audience. We don’t know their background, we don’t know why they came to the document, and we don’t always remember that communication is more than just transmission.

Learning social rules is an ongoing process. It’s exhausting and difficult for many. Noirin refers to it as running in emulation. It’s like booting up a virtual machine to try and understand how something works in a different context.

Noirin mentioned Brian Eno’s Oblique Strategies, a way to help with creative problem solving. When you’re stuck on a problem you can draw a card and apply it to the situation you’re facing. They’re not so much advice as a means to remind you how to think about problems.

Noirin discussed a few strategies for making our docs more emotionally engaging. First, we have to understand expectations. This applies to many aspects of our communication. The expectations our users have when reading documentation, when a boss reads our email, and more are important to how our text is received.

Most people assume their incoming communication has tact attached to it. We don’t assume communication is rude and abrasive. When it is, it surprises us. To solve this Noirin recommends we all attach a little tact to our output.

The next strategy Noirin covered is that zero is not negative. We should try to recognize when we’re projecting negative emotions into a space that carries no emotion at all. If it’s unclear what the emotional context is, ask. That’s the only way you can be clear about the intent of a message.

If you transmitted a message and a different message was received, the onus is on you. You have to make sure your audience understands what they’re reading. Communication is a two-way medium and if something is misunderstood it’s not entirely the reader’s fault. The reader is the only thing that matters with documentation. When in doubt we should rephrase. If you have to ask whether a sentence is grammatically correct, it doesn’t matter. Rewrite it.

The readers of your documentation don’t know how you feel. Our readers can’t see us, they can’t hear us, they don’t know if we’re having a good day or a bad day. Stating our emotions is a good way to get conversations back on track. If a conversation over text isn’t going well, state your emotions.

Noirin recommends moving through communication flow like this: email, IM or IRC, voice, video, real life. Those are in increasing order of fidelity. If email doesn’t work, move to IM. If that doesn’t work, move to voice. As she put it, “the fastest way to pass a Turing test is to pick up the phone.”

Perception is reality. If someone feels attacked, for example, they will shut down. That inherently makes their feeling reality. Reality is not what you’re trying to communicate, it’s what they’re feeling.

Noirin’s last point is that if it doesn’t matter, do it their way. Don’t be a stubborn fool just because you want it done your way.

Write the Docs: Teresa Talbot – Technically Communicating Internationally

I’m at Write the Docs today in Portland and will be posting notes from sessions throughout the day. These are all posted right after a talk finishes so they’re rough around the edges.

Teresa continued afternoon sessions by talking about the why and how of working abroad. She’s been a technical writer for about 20 years and spent 7 of those years working outside of the United States.

There’s a strong demand for technical writers outside of the US. This is largely because English is the most-spoken second language in the world. Lots of tech companies abroad wanted English-speaking technical communicators. Teresa has even worked for companies in the UK because they sought a US-specific technical translator.

The first route to working abroad is to have a company sponsor you. This is what allowed Teresa to work and live in Holland. While this gives you certain benefits, like state-run healthcare, it also more directly subjects you to the unique aspects of that country’s tax and employment laws.

Another route is to work as a contracting American for an international company. Teresa did this for a translation company working in Japan. Since Teresa was billing from a US social security number she didn’t need a work permit, which made things more convenient.

You can also start a company abroad. Teresa did this in Bulgaria and while she had a business license she never did get a residency permit.

Overall Teresa’s talk dove into lots of the nitty-gritty of working abroad. Not the best content for notes but I noted what I could. 🙂