
Keeping GitHub OAuth Tokens Safe

While making your source code available in a public GitHub repository is awesome, it's important to be sure you don't accidentally commit your passwords, secrets, or anything else that other people shouldn't know.

Starting today you can commit more confidently, knowing that we will email you if you push one of your OAuth Access Tokens to any public repository with a git push command. As an extra bonus, we'll also revoke your token so it can't be used to perform any unauthorized actions on your behalf.

For more tips on keeping your account secure, see "Keeping your SSH keys and application access tokens safe" in GitHub Help.

GitHub Security Bug Bounty program turns one

It's already been a year since we launched the GitHub Security Bug Bounty, and, thanks to bug reports from researchers across the globe, 73 previously unknown security vulnerabilities in our applications have been identified and fixed.

Bugs squashed

Of the 1,920 submissions we received in the past year, 869 warranted further review, helping us identify and fix vulnerabilities in nine of the OWASP Top 10 classifications. Thirty-three researchers earned a cumulative $50,100 for the 57 medium- to high-risk vulnerabilities they reported.

Bounty submissions per week

We also saw some incredibly involved and creative vulnerabilities reported.

Our top submitter, @adob, reported a persistent DOM-based cross-site scripting vulnerability that relied on a previously unknown Chrome browser bug to bypass our Content Security Policy.

Our second most prolific submitter, @joernchen, reported a complex vulnerability in the communication between two of our backend services that could allow an attacker to set arbitrary environment variables. He followed that up by finding a way to achieve arbitrary remote command execution by setting the right environment variables.

New year, higher payouts

To kick off our Bug Bounty Program's second year, we're doubling the maximum bounty payout, from $5,000 to $10,000. If you've found a vulnerability that you'd like to submit to the GitHub security team for review, send us the details, including the steps required to reproduce the bug. You can also follow @GitHubSecurity for ongoing updates about the program.

Thanks to everyone who made the first year of our Bug Bounty a success. Happy hunting in 2015!

How to write the perfect pull request

As a company grows, people and projects change. To continue to nurture the culture we want at GitHub, we've found it useful to remind ourselves what we aim for when we communicate. We recently introduced these guidelines to help us be our best selves when we collaborate on pull requests.

Approach to writing a Pull Request

  • Include the purpose of this Pull Request. For example:
    This is a spike to explore…
    This simplifies the display of…
    This fixes handling of…
  • Consider providing an overview of why the work is taking place (with any relevant links); don’t assume familiarity with the history.
  • Remember that anyone in the company could be reading this Pull Request, so the content and tone may inform people other than those taking part, now or later.
  • Be explicit about what feedback you want, if any: a quick pair of :eyes: on the code, discussion on the technical approach, critique on design, a review of copy.
  • Be explicit about when you want feedback. If the Pull Request is a work in progress, say so. A prefix of “[WIP]” in the title is a simple, common pattern to indicate that state.
  • @mention individuals that you specifically want to involve in the discussion, and mention why. (“/cc @jesseplusplus for clarification on this logic”)
  • @mention teams that you want to involve in the discussion, and mention why. (“/cc @github/security, any concerns with this approach?”)

Offering feedback

  • Familiarize yourself with the context of the issue, and reasons why this Pull Request exists.
  • If you disagree strongly, consider giving it a few minutes before responding; think before you react.
  • Ask, don’t tell. (“What do you think about trying…?” rather than “Don’t do…”)
  • Explain your reasons why code should be changed. (Not in line with the style guide? A personal preference?)
  • Offer ways to simplify or improve code.
  • Avoid using derogatory terms, like “stupid”, when referring to the work someone has produced.
  • Be humble. (“I’m not sure, let’s try…”)
  • Avoid hyperbole. (“NEVER do…”)
  • Aim to develop professional skills, group knowledge and product quality, through group critique.
  • Be aware of negative bias with online communication. (If content is neutral, we assume the tone is negative.) Can you use positive language as opposed to neutral?
  • Use emoji to clarify tone. Compare “:sparkles: :sparkles: Looks good :+1: :sparkles: :sparkles:” to “Looks good.”

Responding to feedback

  • Consider leading with an expression of appreciation, especially when feedback has been mixed.
  • Ask for clarification. ("I don’t understand, can you clarify?")
  • Offer clarification; explain the decisions you made to reach the solution in question.
  • Try to respond to every comment.
  • Link to any follow up commits or Pull Requests. (“Good call! Done in 1682851”)
  • If there is growing confusion or debate, ask yourself if the written word is still the best form of communication. Talk (virtually) face-to-face, then mutually consider posting a follow-up to summarize any offline discussion (useful for others who may be following along, now or later).

These guidelines were inspired partly by Thoughtbot's code review guide.

Our guidelines suit the way we work, and the culture we want to nurture. We hope you find them useful too.

Happy communicating!

How GitHub uses GitHub to document GitHub

Providing well-written documentation helps people understand, make use of, and contribute back to your project, but it's only half of the documentation equation. The underlying system used to serve documentation can make life easier for the people writing it—whether that's just you or the team you work with.

The hardest part about documentation should be deciding which words to use, not configuring tools or figuring out how to deploy updates. Members of the GitHub Documentation Team come from backgrounds where crude XML-based authoring tools and complicated CMSs are the norm. We didn't want to use those tools here so we've spent a good deal of time configuring our own documentation workflow and set-up.

We've talked before about how we use GitHub to build GitHub; here's a look at how we use GitHub Pages to serve our GitHub Help documentation to millions of readers each month.

Our previous setup

A few months ago, we migrated our Help site from a custom-built Rails app to a static Jekyll site hosted on GitHub Pages. Our previous Help site consisted of two separate repositories:

  • A Rails application, which was responsible for managing the site, the assets, and the search implementation.
  • The actual content, which was just a grouping of Markdown files.

Our Rails app was hosted on a third-party service; as updates were made to the code, we deployed them with Hubot and Chatops, as we do with the rest of GitHub.

Our typical writing workflow looked like this:

  • The Documentation Team took note when a new feature was shipping.
  • We'd create a new issue to track the feature.
  • When we were ready, we'd open a pull request to start iterating on the content.
  • When the content was in a good place, we'd @mention the team (@github/docs) and have a peer editor review our words.
  • When the feature was ready to ship, we'd merge the pull request. A webhook would fire from the content repository to our hosted Rails app; the webhook's payload updated a database row containing the article's raw Markdown.

Here's an example conversation from @neveret and @bernars showing a bit of our normal editing workflow:

Sample conversation

Working with pull requests was fantastic, because it directly matched the GitHub flow we use across the company. And we liked writing in Markdown, because its syntax enabled us to effectively describe new features in no time.

However, our Rails implementation was a fairly complicated setup:

  • Our reliance on an external host required dedicated employees on our Engineering, Ops, and Security teams to monitor the site and respond to incidents as they arose.
  • Our Documentation team couldn't easily view local changes to the content. Even though we wrote in Markdown, we'd still need to set up a local instance of the Rails app and run a script to import the content into a database, just to see how it would look on the site.
  • We were constantly tweaking the Rails server, but each request a reader made to the site was still slow: the HTML was generated on the fly, requiring database calls, and we found ourselves endlessly iterating on stronger caching strategies.

We knew we could do much better.

Our new setup

When Jekyll 2.0 was released, we saw an opportunity to replace our existing setup with a static site. The new Collections document type lets you define a file structure that matches your needs. In addition, Jekyll 2.0 introduced support for Sass and CoffeeScript assets, which simplifies writing front-end code.
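For illustration only (the collection name below is made up, not our actual content structure), declaring a collection takes just a few lines of _config.yml:

collections:
  articles:
    output: true
    permalink: /articles/:path/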

Open source is great because it's, well, open. As we migrated to Jekyll, we made several pull requests to components of Jekyll, making it a better tool for users of GitHub Pages.

Very little of our original workflow has changed. We still write in Markdown and we still open pull requests for an editorial review. When the pull request is merged, the GitHub Pages site is automatically built and deployed within seconds.

Here's a quick rundown on how we're using core Jekyll features and a handful of plugins to implement the help site.

Gems we use

We intentionally rely on core Jekyll code as much as possible, to minimize our reliance on maintaining custom plugins.

Jekyll 2.0 introduced a new plugin type called a Converter that transforms any markup into HTML. This frees the writer up to compose content however she chooses, and Jekyll will just serve the final HTML. For example, you can write your posts in AsciiDoc, if that's your thing.

To that end, we wrote jekyll-html-pipeline, an implementation of our own open-source html-pipeline. This ensures that the content on our Help site looks the same as content everywhere on GitHub. We also wrote our own Markdown filter to provide some syntax extensions that make writing documentation much easier.

Search

With the previous Rails site, we were using an ElasticSearch provider that indexed our database and implemented a search system for our Help site.

Now, we use lunr-js to provide a faster client-side search experience. In sifting through our analytics, we found that the vast majority of our users relied on an external search provider to get to our documentation. It didn't make sense, during or after the migration, to expend much energy on a server-side search solution.

Content references

The Docs team really wanted to use "content references," or conrefs, when writing documentation. A conref allows you to write a chunk of text once and reuse it throughout the site. (The idea was borrowed from the DITA standard.)

The old Rails app wouldn't permit us to write reusable content, but now we can with the power of Jekyll's data files. For example, we've defined a file called conrefs.yml, and have a set of key-value strings that look something like this:

repositories:
  create_new: |
    1. In the upper-right corner of any page, click {{ octicon-plus Plus symbol }}, and then click **New repository**.
       ![New repository menu](/assets/images/help/repository/repo-create.png)

Our keys are grouped by specificity (repositories.create_new); the values they contain are just plain Markdown ("In the upper-right corner..."). We can now reuse this single step across several pages of content that refer to creating a new repository by writing the appropriate Liquid syntax:

To start the process:

{{ site.data.conrefs.repositories.create_new }}
2. Do something else.
3. You're done!

As GitHub's UI evolves, we might need to change the image or rewrite the directional pointer. With a conref, we only have to make the change in one location, rather than a dozen.

Versioned documentation

Another goal of the move was to be able to provide versioned Help documentation. With the release of Enterprise 2.0.0, we began to provide different content sets for the previous 11.10.340 and the current 2.0 releases. In order to do that, we build the Jekyll site with a special audience flag, and check in the generated HTML as part of our Pages repository.

For example, in our config.yml file, we set a key called audience to 11.10.340. If a feature exists that's available in Enterprise 2.0 but not 11.10.340, we demarcate the section using Liquid tags like this:

{% if site.audience != '11.10.340' %}

This new feature...

{% endif %}

Again, this is just taking advantage of core features in Jekyll; we didn't need to build or maintain any aspect of this.
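As a sketch of how an audience-specific build can be driven (the override file name here is an assumption for illustration, not our actual setup), Jekyll lets you layer configuration files, with later files overriding earlier ones:

# _config.yml (default build)
audience: "2.0"

# _config-11.10.340.yml (layered on for the legacy Enterprise build)
audience: "11.10.340"

The legacy content set can then be built with jekyll build --config _config.yml,_config-11.10.340.yml, where the later file overrides the audience key.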

Testing our site

Just because the site is static doesn't mean that we should avoid test-driven development.

Our first line of defense for testing content has always been html-proofer. This tool helps verify that none of our links and images are broken by quickly validating every URL in our built site.
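A minimal run looks something like this (assuming the html-proofer gem is available in the site's bundle; exact flags vary by version):

bundle exec jekyll build
bundle exec htmlproofer ./_site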

Rubyists are familiar with using Capybara to simulate website interactions in their tests. Would it be crazy to implement a similar idea with our static site? Nope! Our own @bkeepers wrote a blog post four years ago talking about this very problem. With that, we were able to write stronger tests that covered our content and our site behavior. For example, we check that a referenced conref is valid (by looking up the key in the YAML file) or that our JavaScript is functioning properly.

Our Help documentation runs with CI to ensure that nothing broken ever gets in front of our readers:

Our help-docs CI build

Speed

As mentioned above, our new Pages implementation is significantly faster than the old Rails app. This is partly because the site is a bunch of static HTML files—nothing is fetched from a database. More significantly, we've already spent a lot of time configuring our Pages servers to be blazing fast for everyone. The same advantages we have, like serving assets off of a CDN, are also available to every GitHub user.

Help docs GA site load times

Making GitHub Pages work for you

Documentation teams across GitHub can take advantage of the GitHub Flow, Jekyll 2.0, and GitHub Pages to produce high-quality documentation. The benefits that GitHub Pages provides to our Documentation Team are already available to any user running a GitHub Pages site.

With our move to Pages, we didn't rebuild any new components. We spent far less time building anything and more time discussing a workflow that made sense for our team and company. By committing to using the same hosting features we provide to every GitHub user, we were able to provide better documentation, faster. Our internal workflow has made us more productive, and enabled us to provide features we never could before, such as versioned content.

If you have any questions on our setup, past or present, we're happy to help!

Improving GitHub's SSL setup

To keep GitHub as secure as possible for every user, we will remove RC4 support in our SSL configuration on github.com and in the GitHub API on January 5th 2015.

RC4 has a number of cryptographic weaknesses that may be exploited, impacting the security of your data. More details about these vulnerabilities are listed in the current IETF draft.
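If you'd like to verify the change from your own machine, one way (assuming the OpenSSL command-line tool is installed) is to attempt a handshake restricted to RC4 cipher suites; once RC4 is removed, a connection like this should fail to negotiate:

openssl s_client -connect github.com:443 -cipher RC4 < /dev/null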

If you are using Internet Explorer on Windows XP, you will no longer be able to access github.com once this change takes place. Windows XP only supports outdated SSL ciphers, is no longer supported by Microsoft, and contains a known critical security problem in its SSL implementation.

We strongly recommend that Windows XP users upgrade to a newer version of Windows. If this is not possible, you will need to use Chrome or Firefox to access GitHub on Windows XP. The git client available at git-scm.com still works on Windows XP.

Vulnerability announced: update your Git clients

A critical Git security vulnerability has been announced today, affecting all versions of the official Git client and all related software that interacts with Git repositories, including GitHub for Windows and GitHub for Mac. Because this is a client-side only vulnerability, github.com and GitHub Enterprise are not directly affected.

The vulnerability affects Git and Git-compatible clients that access Git repositories on a case-insensitive or case-normalizing filesystem. An attacker can craft a malicious Git tree that causes Git to overwrite its own .git/config file when cloning or checking out a repository, leading to arbitrary command execution on the client machine. Git clients running on OS X (HFS+) or any version of Microsoft Windows (NTFS, FAT) are exploitable through this vulnerability. Linux clients are not affected if they run on a case-sensitive filesystem.

We strongly encourage all users of GitHub and GitHub Enterprise to update their Git clients as soon as possible, and to be particularly careful when cloning or accessing Git repositories hosted on unsafe or untrusted hosts.
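To check which client you're running, and, on a patched client, to turn on the stricter path checks even on platforms where they are not enabled by default, the core.protectHFS and core.protectNTFS settings that accompany these fixes can be set by hand (a sketch only; behavior depends on your Git version):

git --version
git config --global core.protectHFS true
git config --global core.protectNTFS true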

Repositories hosted on github.com cannot contain any of the malicious trees that trigger the vulnerability because we now verify and block these trees on push. We have also completed an automated scan of all existing content on github.com to look for malicious content that might have been pushed to our site before this vulnerability was discovered. This work is an extension of the data-quality checks we have always performed on repositories pushed to our servers to protect our users against malformed or malicious Git data.

Updated versions of GitHub for Windows and GitHub for Mac are available for immediate download; both contain the security fix in the Desktop application itself and in the bundled version of the Git command-line client.

In addition, the following updated versions of Git address this vulnerability:

  • The Git core team has announced maintenance releases for all current versions of Git (v1.8.5.6, v1.9.5, v2.0.5, v2.1.4, and v2.2.1).

  • Git for Windows (also known as MSysGit) has released maintenance version 1.9.5.

  • The two major Git libraries, libgit2 and JGit, have released maintenance versions with the fix. Third party software using these libraries is strongly encouraged to update.

More details on the vulnerability can be found in the official Git mailing list announcement and on the git-blame blog.

GitHub Pages Legacy IP Deprecation

Update: We've extended the deprecation deadline to February 2, 2015 to give Pages users more time to update their DNS records.


If you use a custom domain with GitHub Pages, please verify that your domain's DNS settings are properly configured to point to the most up-to-date GitHub IP addresses. This will ensure that your site remains available after December 1st, 2014.

GitHub Pages allows you to set up a custom domain by adding the domain to a CNAME file, and pointing your domain's DNS record to GitHub's servers. If you don't use this feature, for example, if your GitHub Pages site is published as username.github.io, you don't need to take any action at this time. Please enjoy this animated GIF for being awesome.
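If you do use a custom domain, recall that the CNAME file at the root of your Pages source contains nothing but the bare domain itself, for example:

www.example.com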

Why the change?

Nearly a year ago, we announced improvements to how we serve GitHub Pages sites. Today we're making that change permanent by deprecating our old GitHub Pages infrastructure. If your custom domain is pointed at these legacy IPs, you'll need to update your DNS configuration immediately to keep things running smoothly.

How long do I have to make the switch?

Starting the week of November 10th, pushing to a misconfigured site will result in a build error and you will receive an email stating that your site's DNS is misconfigured. Your site will remain available to the public, but changes to your site will not be published until the DNS misconfiguration is resolved.

For the week of November 17th, there will be a week-long brownout for improperly configured GitHub Pages sites. If your site is pointed to a legacy IP address, you will receive a warning message that week, in place of your site's content. Normal operation will resume at the conclusion of the brownout.

Starting December 1st, custom domains pointed to the deprecated IP addresses will no longer be served via GitHub Pages. No repository or Git data will be affected by the change.

How do I know if I'm affected?

If you have a GitHub Pages site pointed at one of the old IP addresses, you will receive an email from us this week letting you know that you need to make the change (and you should have been receiving an email on each push for the past several months). If the suspense is killing you, there are a few ways to check for yourself:

  1. If you're using the GitHub Pages Gem, update to the latest version, and run github-pages health-check from your site's root directory. That'll make sure your site's DNS is in ship-shape.

  2. Don't have the GitHub Pages Gem?

    • If you're on a Mac or Linux machine, paste this command into a terminal window, replacing your-domain.com with your site's domain: dig your-domain.com | grep -E '(207.97.227.245|204.232.175.78|199.27.73.133)' || echo "OK". If you see the word "OK", you're all set.
    • On a Windows machine, you'll want to run nslookup your-domain.com and ensure that the output does not include any of the deprecated IP addresses (207.97.227.XXX, 204.232.175.XX, or 199.27.73.XXX).
  3. From your domain registrar's web interface, head on over to your domain's DNS settings. Your domain should be either a CNAME record pointing to username.github.io, an ALIAS record, or an A record pointing to an IP address that begins with 192.30.252 (see the example records after this list).
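For reference, a correctly configured zone ends up looking roughly like this (example domain, and illustrative apex addresses; the GitHub Pages documentation lists the current IP addresses):

www.example.com.   IN  CNAME  username.github.io.
example.com.       IN  A      192.30.252.153
example.com.       IN  A      192.30.252.154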

Okay, I'm sold. What do I need to do?

If one of the methods above indicates that your DNS is misconfigured, or if you just want to be sure, please follow the instructions for setting up a custom domain with GitHub Pages.

Questions? We're here to help.

Happy publishing!

Security vulnerability in bash addressed

Update: 2014-09-29 23:10 UTC

We have published an update to the Git Shell tools for GitHub for Windows, which resolves the bash vulnerabilities CVE-2014-6271, CVE-2014-7169, CVE-2014-7186 and CVE-2014-7187. If you are running GitHub for Windows, we strongly encourage you to upgrade. You can check if you are on the latest version, and upgrade if needed, by opening "Tools" -> "About GitHub for Windows..."


Update: 2014-09-28 17:30 UTC

Two new bash vulnerabilities, CVE-2014-7186 and CVE-2014-7187, have been discovered. We have now released special patches of GitHub Enterprise using the latest upstream bash fix for CVE-2014-7186 and CVE-2014-7187. Upgrade instructions have been sent to all GitHub Enterprise customers, and we strongly encourage all customers to upgrade their instance using this latest release. GitHub.com remains unaffected by this vulnerability.


Update: 2014-09-26 00:22 UTC

Security patches released yesterday for the bash command vulnerability identified in CVE-2014-6271 turned out to be incomplete, and a new vulnerability, CVE-2014-7169, was identified. We have now released special patches of GitHub Enterprise using the latest upstream bash fix for CVE-2014-7169. Upgrade instructions have been sent to all GitHub Enterprise customers, and we strongly encourage all customers to upgrade their instance using this latest release. GitHub.com remains unaffected by this vulnerability.


Update: 2014-09-25 15:45 UTC

GitHub is closely monitoring new developments that indicate the existing bash patch for CVE-2014-6271 is incomplete. The fix for this new bash vulnerability is still in progress, but we will be releasing a new patch for GitHub Enterprise once it has been resolved. At this time, we still strongly encourage all GitHub Enterprise customers to update their instances using the patch made available yesterday.


This morning it was disclosed that Stephane Chazelas discovered a critical vulnerability in the GNU bash utility present on the vast majority of Unix and Linux systems. Using this vulnerability, an attacker can force the execution of arbitrary commands on an affected server. While these commands may not run with root privileges, they provide a significant vector for further exploitation of a system.
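A widely circulated one-liner checks for the original CVE-2014-6271 issue (it does not cover the follow-on CVEs); an unpatched bash prints "vulnerable", while a patched bash does not:

env x='() { :;}; echo vulnerable' bash -c "echo this is a test"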

We have released special patches of GitHub Enterprise to fix this vulnerability, and have provided detailed instructions to all our Enterprise customers on how to upgrade their instance. An immediate upgrade is required.

None of the extensive penetration testing we've performed today has uncovered any vulnerability on GitHub.com, including git over SSH. As an added precaution, however, we have patched all systems to ensure the vulnerability is addressed.

Making MySQL Better at GitHub

At GitHub we say, "it's not fully shipped until it's fast." We've talked before about some of the ways we keep our frontend experience speedy, but that's only part of the story. Our MySQL database infrastructure dramatically affects the performance of GitHub.com. Here's a look at how our infrastructure team seamlessly conducted a major MySQL improvement last August and made GitHub even faster.

The mission

Last year we moved the bulk of GitHub.com's infrastructure into a new datacenter with world-class hardware and networking. Since MySQL forms the foundation of our backend systems, we expected database performance to benefit tremendously from an improved setup. But creating a brand-new cluster with brand-new hardware in a new datacenter is no small task, so we had to plan and test carefully to ensure a smooth transition.

Preparation

A major infrastructure change like this requires measurement and metrics gathering every step of the way. After installing base operating systems on our new machines, it was time to test out our new setup with various configurations. To get a realistic test workload, we used tcpdump to extract SELECT queries from the old cluster that was serving production and replayed them onto the new cluster.
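As a rough sketch of that capture step (not necessarily the exact pipeline we used), one common approach pairs tcpdump with the Percona Toolkit to turn captured MySQL traffic into a slow-log-style file that replay tooling can consume:

# Capture MySQL wire traffic on the active cluster for a window of time.
tcpdump -i eth0 port 3306 -s 65535 -x -nn -q -tttt > mysql.tcp.txt
# Decode the capture into slow-log format for later replay.
pt-query-digest --type tcpdump --output slowlog mysql.tcp.txt > queries.log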

MySQL tuning is very workload specific, and well-known configuration settings like innodb_buffer_pool_size often make the most difference in MySQL's performance. But on a major change like this, we wanted to make sure we covered everything, so we took a look at settings like innodb_thread_concurrency, innodb_io_capacity, and innodb_buffer_pool_instances, among others.
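For illustration only (the values below are placeholders, not our production settings), the knobs just mentioned live under the [mysqld] section of my.cnf:

[mysqld]
# Usually the single most impactful setting: memory for caching data and indexes.
innodb_buffer_pool_size      = 96G
# Split large buffer pools into instances to reduce mutex contention.
innodb_buffer_pool_instances = 8
# Cap threads running inside InnoDB concurrently (0 means no limit).
innodb_thread_concurrency    = 0
# Background I/O budget for flushing and merging, in IOPS.
innodb_io_capacity           = 2000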

We were careful to only make one test configuration change at a time, and to run tests for at least 12 hours. We looked for query response time changes, stalls in queries per second, and signs of reduced concurrency. We observed the output of SHOW ENGINE INNODB STATUS, particularly the SEMAPHORES section, which provides information on work load contention.

Once we were relatively comfortable with configuration settings, we started migrating one of our largest tables onto an isolated cluster. This served as an early test of the process, gave us more space in the buffer pools of our core cluster and provided greater flexibility for failover and storage. This initial migration introduced an interesting application challenge, as we had to make sure we could maintain multiple connections and direct queries to the correct cluster.

In addition to all our raw hardware improvements, we also made process and topology improvements: we added delayed replicas, faster and more frequent backups, and more read replica capacity. These were all built out and ready for go-live day.

Making a list; checking it twice

With millions of people using GitHub.com on a daily basis, we did not want to take any chances with the actual switchover. We came up with a thorough checklist before the transition:

checklist

We also planned a maintenance window and announced it on our blog to give our users plenty of notice.

Migration day

At 5am Pacific Time on a Saturday, the migration team assembled online in chat and the process began:

Chat screenshot: the migration team assembles

We put the site in maintenance mode, made an announcement on Twitter, and set out to work through the list above:

tweet

13 minutes later, we were able to confirm operations of the new cluster:

test

Then we flipped GitHub.com out of maintenance mode, and let the world know that we were in the clear.

all clear

Lots of up-front testing and preparation meant that the work we needed to do on go-live day was kept to a minimum.

Measuring the final results

In the weeks following the migration, we closely monitored performance and response times on GitHub.com. We found that our cluster migration cut the average GitHub.com page load time by half and the 99th percentile by two-thirds:

Things got fast

What we learned

Functional partitioning

During this process we decided that moving larger tables that mostly store historical data to a separate cluster was a good way to free up disk and buffer pool space. This left more resources for our "hot" data, and required splitting some connection logic so the application could query multiple clusters. That approach proved to be a big win for us, and we are working to reuse the pattern.

Always be testing

You can never do too much acceptance and regression testing for your application. Replicating data from the old cluster to the new cluster while running acceptance tests and replaying queries were invaluable for tracing out issues and preventing surprises during the migration.

The power of collaboration

Large changes to infrastructure like this mean a lot of people need to be involved, so pull requests functioned as our primary point of coordination as a team. We had people all over the world jumping in to help.

Deploy day team map:

This created a workflow where we could open a pull request to try out changes, get real-time feedback, and see commits that fixed regressions or errors, all without phone calls or face-to-face meetings. When everything has a URL that can provide context, it's easy to involve a diverse range of people and simple for them to give feedback.

One year later...

A full year later, we are happy to call this migration a success — MySQL performance and reliability continue to meet our expectations. And as an added bonus, the new cluster enabled us to make further improvements towards greater availability and query response times. I'll be writing more about those improvements here soon.

GitHub for Windows now supports @mentions

Sometimes, you just want to grab someone's attention when you're finished with some cool code. That's why we've added support for GitHub's @mention feature inside GitHub for Windows. You can now @mention repository collaborators, and when you publish your changes they'll be notified that you'd like them to have a look.

mentions

If you already have GitHub for Windows installed, you can update by selecting 'About GitHub for Windows' in the gear menu on the top right. Otherwise, download the latest version from the GitHub for Windows website.

Say hello to GitHub for Windows 2.0

Two years ago we launched GitHub for Windows as the easiest way to use Git and GitHub on Windows. Today we're shipping a major update that helps you focus more on your work and gives you a more streamlined way of getting that work to and from GitHub.

Your work, emphasized

When you write code, your workspace should be as distraction free as possible. We've focused GitHub for Windows so that what you're working on is front and center.

GitHub for Windows 2.0

Everything you need in one screen

The less time you spend navigating through menus and options, the more you can focus on getting things done. Now your local repositories are always available in the left sidebar, and you can create, clone, and publish repositories without having to navigate to a new screen.

Creating and publishing repositories

The sidebar also groups your repositories by where they originated, so repositories associated with GitHub Enterprise are easy to distinguish from your personal projects and it's simple to switch between them.

More of GitHub locally

GitHub for Windows also now supports more of the GitHub feature set. You can pick an ignore file template for your project when you create a repository, and you can include emoji and gifs in your commit messages.

What are you waiting for?

If you have GitHub for Windows installed it will automatically update to the latest version. If you don't have it installed, download GitHub for Windows 2.0 at windows.github.com.

Security: Heartbleed vulnerability

On April 7, 2014 information was released about a new vulnerability (CVE-2014-0160) in OpenSSL, the cryptography library that powers the vast majority of private communication across the Internet. This library is key for maintaining privacy between servers and clients, and confirming that Internet servers are who they say they are.

This vulnerability, known as Heartbleed, would allow an attacker to steal the keys that protect communication, user passwords, even the system memory of a vulnerable server. This represents a major risk to large portions of private traffic on the Internet, including github.com.

Note: GitHub Enterprise servers are not affected by this vulnerability. They run an older OpenSSL version which is not vulnerable to the attack.

As of right now, we have no indication that the attack has been used against github.com. That said, the nature of the attack makes it hard to detect so we're proceeding with a high level of caution.

What is GitHub doing about this?

UPDATE: 2014-04-08 16:00 PST - All browser sessions that were active prior to the vulnerability being addressed have been reset. See below for more info.

We've completed a number of measures already and continue to work the issue.

  1. We've patched all our systems using the newer, protected versions of OpenSSL. We started upgrading yesterday after the vulnerability became public and completed the roll out today. We are also working with our providers to make sure they're upgrading their systems to minimize GitHub's exposure.

  2. We've recreated and redeployed new SSL keys and reset internal credentials. We have also revoked our older certs just to be safe.

  3. We've forcibly reset all browser sessions that were active prior to the vulnerability being addressed on our servers. You may have been logged out and have to log back into GitHub. This was a proactive measure to defend against potential session hijacking attacks that may have taken place while the vulnerability was open.

Prior to this incident, GitHub made a number of enhancements to mitigate attacks like this. We deployed Perfect Forward Secrecy at the end of last year, which makes it impossible to use stolen encryption keys to read old encrypted communications. We are working to find more opportunities like this.

What should you do about Heartbleed right now?

Right now, GitHub has no indication that the vulnerability has been used outside of testing scenarios. However, out of an abundance of caution, you can:

  1. Change your GitHub password. Be sure your password is strong; for more information, see What is a strong password?
  2. Enable Two-Factor Authentication.
  3. Revoke and recreate personal access and application tokens.

Stay tuned

GitHub works hard to keep your code safe. We are continuing to respond to this vulnerability and will post updates as things progress. For more information as it's available, keep an eye on Twitter or the GitHub Blog.

Denial of Service Attacks

On Tuesday, March 11th, GitHub was largely unreachable for roughly 2 hours as the result of an evolving distributed denial of service (DDoS) attack. I know that you rely on GitHub to be available all the time, and I'm sorry we let you down. I'd like to explain what happened, how we responded to it, and what we're doing to reduce the impact of future attacks like this.

Background

Over the last year, we have seen a large number and variety of denial of service attacks against various parts of the GitHub infrastructure. There are two broad types of attack that we think about when we're building our mitigation strategy: volumetric and complex.

We have designed our DDoS mitigation capabilities to allow us to respond to both volumetric and complex attacks.

Volumetric Attacks

Volumetric attacks are intended to exhaust some resource through the sheer weight of the attack. This type of attack has been seen with increasing frequency lately through UDP based amplification attacks using protocols like DNS, SNMP, or NTP. The only way to withstand an attack like this is to have more available network capacity than the sum of all of the attacking nodes or to filter the attack traffic before it reaches your network.

Dealing with volumetric attacks is a game of numbers. Whoever has more capacity wins. With that in mind, we have taken a few steps to allow us to defend against these types of attacks.

We operate our external network connections at very low utilization. Our internet transit circuits are able to handle almost an order of magnitude more traffic than our normal daily peak. We also continually evaluate opportunities to expand our network capacity. This helps to give us some headroom for larger attacks, especially since they tend to ramp up over a period of time to their ultimate peak throughput.

In addition to managing the capacity of our own network, we've contracted with a leading DDoS mitigation service provider. A simple Hubot command can reroute our traffic to their network which can handle terabits per second. They're able to absorb the attack, filter out the malicious traffic, and forward the legitimate traffic on to us for normal processing.

Complex Attacks

Complex attacks are also designed to exhaust resources, but generally by performing expensive operations rather than saturating a network connection. Examples of these are things like SSL negotiation attacks, requests against computationally intensive parts of web applications, and the "Slowloris" attack. These kinds of attacks often require significant understanding of the application architecture to mitigate, so we prefer to handle them ourselves. This allows us to make the best decisions when choosing countermeasures and tuning them to minimize the impact on legitimate traffic.

First, we devote significant engineering effort to hardening all parts of our computing infrastructure. This involves things like tuning Linux network buffer sizes, configuring load balancers with appropriate timeouts, applying rate limiting within our application tier, and so on. Building resilience into our infrastructure is a core engineering value for us that requires continuous iteration and improvement.
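To make that concrete with a generic illustration (these are stock Linux examples, not our production values), the network-buffer and backlog tuning mentioned above is the sort of thing set via sysctl:

# /etc/sysctl.conf (illustrative values only)
# Deeper queues for half-open and accepted connections.
net.ipv4.tcp_max_syn_backlog = 8192
net.core.somaxconn = 8192
# SYN cookies keep SYN floods from exhausting the backlog.
net.ipv4.tcp_syncookies = 1
# Larger maximum socket buffers for high-throughput links.
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216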

We've also purchased and installed a software and hardware platform for detecting and mitigating complex DDoS attacks. This allows us to perform detailed inspection of our traffic so that we can apply traffic filtering and access control rules to block attack traffic. Having operational control of the platform allows us to very quickly adjust our countermeasures to deal with evolving attacks.

Our DDoS mitigation partner is also able to assist with these types of attacks, and we use them as a final line of defense.

So what happened?

At 21:25 UTC we began investigating reports of connectivity problems to github.com. We opened an incident on our status site at 21:29 UTC to let customers know we were aware of the problem and working to resolve it.

As we began investigating we noticed an apparent backlog of connections at our load balancing tier. When we see this, it typically corresponds with a performance problem with some part of our backend applications.

After some investigation, we discovered that we were seeing several thousand HTTP requests per second distributed across thousands of IP addresses for a crafted URL. These requests were being sent to the non-SSL HTTP port and were then being redirected to HTTPS, which was consuming capacity in our load balancers and in our application tier. Unfortunately, we did not have a pre-configured way to block these requests and it took us a while to deploy a change to block them.

By 22:35 UTC we had blocked the malicious requests and the site appeared to be operating normally.

Despite the fact that things appeared to be stabilizing, we were still seeing a very high number of SSL connections on our load balancers. After some further investigation, we determined that this was an additional vector the attack was using in an effort to exhaust our SSL processing capacity. We were able to respond quickly using our mitigation platform, but the countermeasures required significant tuning to reduce false positives that impacted legitimate customers. This resulted in approximately 25 more minutes of downtime between 23:05 and 23:30 UTC.

By 23:34 UTC, the site was fully operational. The attack continued for quite some time even once we had successfully mitigated it, but there were no further customer impacts.

What did we learn?

The vast majority of attacks that we've seen in the last several months have been volumetric in terms of bandwidth, and we'd grown accustomed to using throughput as a way of confirming that we were under attack. This attack did not generate significantly more bandwidth but it did generate significantly more packets per second. It didn't look like what we had grown to expect an attack to look like and we did not have the monitoring we needed to detect it as quickly as we would have liked.

Once we had identified the problem, it took us much longer than we'd like to mitigate it. We had the ability to mitigate attacks of this nature in our load balancing tier and in our DDoS mitigation platform, but they were not configured in advance. It took us valuable minutes to configure, test, and tune these countermeasures which resulted in a longer than necessary downtime.

We're happy that we were able to successfully mitigate the attack but we have a lot of room to improve in terms of how long the process takes.

Next steps?

  1. We have already made adjustments to our monitoring to better detect and alert us of traffic pattern changes that are indicative of an attack. In addition, our robots are now able to automatically enable mitigation for the specific traffic pattern that we saw during the attack. These changes should dramatically reduce the amount of time it takes to respond to a wide variety of attacks in the future and reduce their impact on our service.
  2. We are investigating ways to simulate attacks in a controlled manner so that we can test our countermeasures on a regular basis, both to build additional confidence in our mitigation tools and to improve our response time in bringing them to bear.
  3. We are talking to some 3rd party security consultants to review our DDoS detection and mitigation capability. We do a good job mitigating attacks we've seen before, but we'd like to more proactively plan for attacks that we haven't yet encountered.
  4. Hubot is able to route our traffic through our mitigation partner and to apply templates to operate our mitigation platform for known attack types. We've leveled him up with some new templates for attacks like this one so that he can help us recover faster in the future.

Summary

This attack was painful, and even though we were able to successfully mitigate the effects of it, it took us far too long. We know that you depend on GitHub and our entire company is focused on living up to the trust you place in us. I take problems like this personally. We will do whatever it takes to improve how we respond to problems to ensure that you can rely on GitHub being available when you need us.

Thanks for your support!

Passion Projects Short Documentary: Timoni West

We're now 11 installments into our talk series Passion Projects, which we created to help surface and celebrate the work of incredible women in the tech industry.

We sat down with past speaker Timoni West to talk a little more about her background in design and more specifically, the role the Internet is playing in making data available and consumable for everyday people.

Since filming, Timoni has started working with Alphaworks.

Timezone-aware contribution graphs

Today we've made your contribution graphs timezone-aware. GitHub is used everywhere and we want to reflect that in our features. If you happen to work from Japan, Australia or Ulan Bator, we want to count your contributions from your perspective.

When counting commits, we use the timezone information present in the timestamps for those commits. Pull requests and issues opened on the web will use the timezone of your browser. If you use the API you can also specify your timezone.
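You can see the timezone that will be used for a commit by looking at the offset recorded in its author date, for example:

git show -s --format='%ai' HEAD
# => 2014-03-10 09:00:00 +0900   (illustrative output; the trailing offset is the timezone we count)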

We don't want to mess up your current contribution streaks, so only contributions after Monday 10 March 2014 (Coordinated Universal Time) will be timezone-aware.

Enjoy your time(zone)!
