Instagram

The Instagram Engineering Blog Has A New Location

In order to streamline our internal blog operations, all future Instagram Engineering content will be posted on the Engineering at Meta blog located here. This will allow us to post more regularly about the novel engineering work being done at Instagram...

Instagram Data Saver Mode

We recently shipped Data Saver Mode, a new feature on Instagram for Android that helps the app consume less mobile data. In this post…

10 Questions With Shupin Mao, Well-being Tech Lead

Shupin Mao is a senior software engineer at Facebook. During her last four years at the company, Shupin helped several teams and gained experience across Instagram and Facebook, including the Instagram Well-being team. Here she shares what got her into...

Making Instagram.com Faster: Code Size And Execution Optimizations (Part 4)

In recent years instagram.com has seen a lot of changes — we’ve launched stories, filters, creation tools, notifications, and direct messaging as well as a myriad of other features and enhancements. However, as the product grew, a side effect was that our web performance began to slow. Over the last year we made a conscious effort to improve this. This ongoing effort has thus far resulted in almost 50% cumulative improvement to our feed page load time. This series of blog posts will outline some of the work we’ve done that led to these improvements. In part 1 we talked about prefetching data, in part 2 we talked about improving performance by pushing data directly to the client rather than waiting for the client to request the data, and in part 3 we talked about cache-first rendering.

Code size and execution optimizations

In parts 1–3 we covered various ways that we optimized the loading patterns of the critical path static resources and data queries. However, there is another key area we haven’t covered yet that’s crucial to improving web application performance, particularly on low-end devices: ship less code to the user — in particular, ship less JavaScript.

This might seem obvious, but there are a few points to consider here. There’s a common assumption in the industry that what matters is the size of the JavaScript that gets downloaded over the network (i.e. the size post-compression). However, we found that what’s really important is the size pre-compression, as this is what has to be parsed and executed on the user’s device, even if it’s cached locally. This is particularly true if you have a site with many repeat users (and consequently high browser cache hit rates) or users accessing your site on mobile devices. In these cases the parsing and execution performance of JavaScript on the CPU becomes the limiting factor rather than the network download time. For example, when we implemented Brotli compression for our JavaScript assets, we saw a nearly 20% reduction in post-compression size across the wire, but no statistically significant reduction in the overall page load times seen by end users.
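To make those two numbers concrete, here is a small Node.js sketch (bundle.js is a placeholder for any built asset) that compares what the wire sees with what the JavaScript engine must parse:

```js
// Compare post-compression (network) size with pre-compression (parse) size.
const fs = require('fs');
const zlib = require('zlib');

const source = fs.readFileSync('bundle.js'); // placeholder for a built bundle
const compressed = zlib.brotliCompressSync(source);

console.log(`pre-compression (CPU parse/execute cost): ${source.length} bytes`);
console.log(`post-compression (network transfer cost): ${compressed.length} bytes`);
```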

On the other hand, we’ve found that reductions in pre-compression JavaScript size consistently lead to performance improvements. It’s also worth making a distinction between JavaScript that is executed on the critical path and JavaScript that is dynamically imported after the main page load has completed. While ideally it would be nice to reduce the total amount of JavaScript in an application, a key thing to optimize in the short term is the amount of eagerly executed JavaScript on the critical path (we track this with a metric we call Critical Bytes Per Route). Dynamically imported JavaScript that lazy loads generally doesn’t have as significant an effect on page load performance, so it’s a valid strategy to move non-visible or interaction-dependent UI components out of the initial page bundles and into dynamically imported bundles, as in the sketch below.
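A generic sketch of that strategy in React (illustrative only, not Instagram's actual code; CommentsDialog is a hypothetical component):

```js
import React, {lazy, Suspense, useState} from 'react';

// Only needed after a click, so it's moved out of the initial bundle and
// fetched via a dynamic import on first use.
const CommentsDialog = lazy(() => import('./CommentsDialog'));

function Post() {
  const [showComments, setShowComments] = useState(false);
  return (
    <div>
      <button onClick={() => setShowComments(true)}>View comments</button>
      {showComments && (
        <Suspense fallback={<span>Loading…</span>}>
          <CommentsDialog />
        </Suspense>
      )}
    </div>
  );
}

export default Post;
```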

Refactoring our UI to reduce the amount of script on the critical path is going to be essential to improving performance in the long term — but this is a significant undertaking which will take time. In the short term we worked on a number of projects to improve the size and execution efficiency of our existing code in ways that are largely transparent to product developers and require little refactoring of existing product code.

Inline requires

We bundle our frontend web assets using Metro (the same bundler used by React Native), so we get access to inline requires out of the box. Inline requires moves the cost of requiring/importing modules to the first time they are actually used. This means that you can avoid paying execution cost for unused features (though you’ll still pay the cost of downloading and parsing them), and you can better amortize the execution cost over the application startup rather than having a large amount of upfront computation. Essentially, the transform works by replacing the local references to a required module with function calls that require that module on demand; unless the code from that module is actually used, the module is never required (and therefore never executed). You’ll find these inline requires by searching for r(d[ in the Instagram JS source in your browser developer tools. A sketch of the transform follows below.

In most cases this works extremely well, but there are a couple of edge cases to be aware of that can cause problems — namely modules with side effects. Without inline requires, the Module C in the sketch below would output {'foo':'bar'}, but when we enable inline requires it would output undefined, because B has an implicit dependency on A. This is a contrived example, but there are other real world cases where this can have effects: for example, if a module does some logging as a part of its initialization, enabling inline requires could cause this logging to stop happening. This is mostly preventable through linters that check for code that executes immediately at the module scope level, but there were some files we had to blacklist from this optimization, such as runtime polyfills that need to execute immediately. After experimenting with enabling inline requires across the codebase, we saw an improvement in our feed TTI (time to interactive) of 12% and in Display Done of 9.8%, and decided that dealing with some of these minor edge cases was worth it for the performance improvements.
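Here is a rough sketch of the transform's shape (module and function names are illustrative; Metro's actual output uses its own require helper, which is why the r(d[ pattern appears in the shipped source):

```js
// Before: the module is required eagerly when the bundle executes.
const heavyModule = require('heavyModule');

function renderHeavyFeature(enabled) {
  if (!enabled) {
    return null;
  }
  return heavyModule.render();
}

// After (conceptually): the require moves inline, so heavyModule is never
// executed unless the feature is actually used.
function renderHeavyFeatureInlined(enabled) {
  if (!enabled) {
    return null;
  }
  return require('heavyModule').render();
}
```

And an illustrative sketch of the side-effect edge case:

```js
// moduleA.js
module.exports = {};

// moduleB.js
const A = require('./moduleA');
A.foo = 'bar'; // side effect on module A at require time
module.exports = A;

// moduleC.js
const B = require('./moduleB'); // imported but never referenced below
const A = require('./moduleA');
console.log(A.foo);
// Without inline requires: logs 'bar', because requiring B mutated A.
// With inline requires: B is only required at first use; since B is never
// used, its side effect never runs and this logs undefined.
```
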
Serving ES2017 bundles to modern browsers

One of the primary drivers of the adoption of compiler/transpiler tools like Babel was allowing developers to use modern JavaScript coding idioms but still have their applications work in browsers that lacked native support for these latest language features. Since then a number of other important use cases for these tools arose, including compile-to-JS languages like TypeScript and ReasonML, language extensions such as JSX and Flow type annotations, and build-time AST manipulations for things like internationalization. Because of this, it’s unlikely that this extra compilation step is going to disappear from frontend development workflows any time soon. With that said, it’s worth revisiting whether the original purpose for doing this (cross-browser compatibility) is still necessary in 2019.

ES2015 and more recent features like async/await are now well supported across recent versions of most major browsers, so directly serving JavaScript containing these newer features is definitely possible — but there were two key questions that we had to answer first:

1. Would enough users be able to take advantage of this to make the extra build complexity worthwhile (since we’d still need to maintain the legacy transpiling step for older browsers)?
2. What (if any) are the performance advantages of shipping ES2015+ features?

To answer the first question, we first had to determine which features we were going to ship without transpiling/polyfilling and how many build variants we wanted to support for the different browsers. We settled on having two builds: one that would require support for ES2017 syntax, and a legacy build that would transpile back to ES5 (in addition, we added an optional polyfill bundle that would only be added for legacy browsers that lacked runtime support for more recent DOM APIs). Detecting support for these groups is done via some basic user-agent sniffing on the server side, which ensures there is no runtime cost or extra round-trip time from doing client-side detection of which bundles to load.

With this in mind, we ran the numbers and determined that 56% of users to instagram.com could be served the ES2017 build without any transpiling or runtime polyfills — and considering that this percentage is only going to go up over time, it seems worth supporting two builds given the number of users able to utilize the modern one.

Percentage of Instagram users with ES2017 supported vs. unsupported browsers

As for the second question — what are the performance advantages of shipping ES2017 directly — let’s start by looking at what Babel actually does to transpile common constructs (classes, async/await, arrow functions, rest parameters, and destructuring assignment) back to ES5; a simplified sketch of the async/await case follows at the end of this section. There is a considerable size overhead in transpiling these constructs (even if you amortize the cost of some of the runtime helper functions over a large codebase). In the case of Instagram, we saw a 5.7% reduction in the size of our core consumer JavaScript bundles when we removed all ES2017 transpiling plugins from our build. In testing, we found that the end-to-end load times for the feed page improved by 3% for users who were served the ES2017 bundle compared with those who were not.
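Here is that simplified async/await sketch. It is hedged: real Babel output wires in additional helpers and differs in detail, and _asyncToGenerator and regeneratorRuntime are Babel's injected runtime helpers, assumed to be in scope.

```js
// ES2017 source: shipped as-is to browsers in the ES2017 build.
async function loadUser(id) {
  const response = await fetch(`/api/users/${id}`);
  return response.json();
}

// Simplified Babel-style ES5 output: a generator-based state machine plus
// runtime helpers, several times the pre-compression size of the source.
function loadUserES5(id) {
  return _asyncToGenerator(regeneratorRuntime.mark(function _callee() {
    var response;
    return regeneratorRuntime.wrap(function _callee$(_context) {
      while (1) {
        switch (_context.prev = _context.next) {
          case 0:
            _context.next = 2;
            return fetch('/api/users/' + id);
          case 2:
            response = _context.sent;
            return _context.abrupt('return', response.json());
          case 4:
          case 'end':
            return _context.stop();
        }
      }
    }, _callee);
  }))();
}
```
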
Still a long way to go

While the progress made so far is impressive, the work we’ve done represents just the beginning. There’s still a huge amount of room left for improvement in areas such as Redux store/reducer modularization, better code splitting, moving more JavaScript execution off the critical path, optimizing scroll performance, adjusting to different bandwidth conditions, and more.

If you want to learn more about this work or are interested in joining one of our engineering teams, please visit our careers page, follow us on Facebook or on Twitter.

Python At Scale: Strict Modules

Welcome to the third post in our series on Python at scale at Instagram! As we mentioned in the first post in the series, Instagram Server is a several-million-line Python monolith, and it moves quickly: hundreds of commits each day, deployed to production every few minutes. We’ve run into a few pain points working with Python at that scale and speed. This article takes a look at a few that we imagine might impact others as well.

Consider this innocuous-looking sample module:

```python
import re
from mywebframework import db, route

VALID_NAME_RE = re.compile("^[a-zA-Z0-9]+$")

@route('/')
def home():
    return "Hello World!"

class Person(db.Model):
    name: str
```

When someone imports this module, what code will run?

1. We’ll run a bunch of regex code to compile that string to a pattern object.
2. We’ll run the @route decorator. Based on what we see here, we can assume that it's probably registering this view in some URL mapping. This means that just by importing this module, we're mutating global state somewhere else.
3. We’re going to run all the code inside the body of the Person class, which can include arbitrary code. And the Model base class might have a metaclass or an __init_subclass__ method, which is still more arbitrary code we might be running at import.

Pain area one: slow startup and reload

The only line of code in this module that (probably) doesn’t run on import is return "Hello World!", but we can't even say that for sure! So by just importing this simple eight-line module (not even doing anything with it yet!), we are probably running hundreds, if not thousands, of lines of Python code, not to mention modifying a global URL mapping somewhere else in our program.

So what? This is part of what it means for Python to be a dynamic, interpreted language. This lets us do all kinds of useful meta-programming. What's wrong with that?

Nothing is wrong with it, when you're working with relatively small codebases and teams, and you can guarantee some level of discipline in how you use these features. But some aspects of this dynamism can become a concern when you have millions of lines of code worked on by hundreds of developers, many of whom are new to Python.

For example, one of the great things about Python is how fast you can iterate with it: make a change and see the result, no compile needed! But with a few million lines of code (and a messy dependency graph), that advantage starts to turn sour.

Our server startup takes over 20s, and sometimes regresses to more like a minute if we aren't paying attention to keeping it optimized. That means 20-60 seconds between a developer making a change and being able to see the results of that change in their browser, or even in a unit test. This, unfortunately, is the perfect amount of time to get distracted by something shiny and forget what you were doing. Most of that time is spent literally just importing modules, creating function and class objects.

In some ways, that's no different from waiting for another language to compile. But typically compilation can be incremental: you can just recompile the stuff you changed and things that directly depend on it, so many smaller changes can compile quickly. But in Python, because imports can have arbitrary side effects, there is no safe way to incrementally reload our server. No matter how small the change, we have to start from scratch every time, importing all those modules, re-creating all those classes and functions, re-compiling all of those regular expressions, etc. Usually 99% of the code hasn't changed since last time we reloaded the server, but we have to re-do all that slow work anyway.

In addition to slowing down developers, this is a significant amount of wasted compute in production, too, since we continuously deploy and are thus reloading the site on production servers constantly all day long.

So that’s our first pain point: slow server startup and reload due to lots of wasted repeat work at import time.

Pain area two: unsafe import side effects

Here’s another thing we often find developers doing at import time: fetching configuration from a network configuration source.

```python
MY_CONFIG = get_config_from_network_service()
```

In addition to slowing down server startup even further, this is dangerous, too. If the network service is not available, we won’t just get a runtime error failing certain requests; our server will fail to start up.

Let’s make this a bit worse, and imagine that someone has added some import-time code in another module that does some critical initialization of the network service. They don’t know where to put this code, so they stick it in some module that happens to get imported pretty early on. Everything works, so they move on.

But then someone else comes along, adds an innocuous import in some other part of the codebase, and through an import chain twelve modules deep, it causes the config-fetching module to now be imported before the one that does the initialization.

Now we’re trying to use the service before it’s initialized, so it blows up. In the best case, where the interaction is fully deterministic, this could still result in a developer tearing their hair out for an hour or two trying to understand why their innocent change is causing something unrelated to break. In a more complex case where it’s not fully deterministic, this could bring down production. And there’s no obvious way to generically lint against or prevent this category of issue.

The root of the problem here is two factors that interact badly:
1) Python allows modules to have arbitrary and unsafe import side effects, and
2) the order of imports is not explicitly determined or controlled; it’s an emergent property of the imports present in all modules in the entire system (and can also vary based on the entry point to the system).

Pain area three: mutable global state

Let’s look at one more category of common errors.

```python
def myview(request):
    SomeClass.id = request.GET.get("id")
```

Here we’re in a view function, and we’re attaching an attribute to some class based on data from the request. Likely you’ve already spotted the problem: classes are global singletons, so we’re putting per-request state onto a long-lived object, and in a long-lived web server process that has the potential to pollute every future request in that process.

The same thing can easily happen in tests, if people try to monkeypatch without a contextmanager like mock.patch. The effect here is pollution of all future tests run in that process, rather than pollution of all future requests. This is a huge cause of flakiness in our test suite. It's so bad, and so hard to thoroughly prevent, that we have basically given up and are moving to one-test-per-process isolation instead.
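A minimal sketch of the difference (hypothetical names; a SimpleNamespace stands in for a real settings module):

```python
from types import SimpleNamespace
from unittest import mock

# Stand-in for a real settings module (hypothetical).
settings = SimpleNamespace(ENABLE_FEATURE=False)

def test_feature_leaky():
    # Bare monkeypatching: never undone, so this pollutes every later
    # test that runs in the same process.
    settings.ENABLE_FEATURE = True

def test_feature_isolated():
    # mock.patch.object restores the original value when the block exits.
    with mock.patch.object(settings, "ENABLE_FEATURE", True):
        assert settings.ENABLE_FEATURE
```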

So that's a third pain point for us. Mutable global state is not merely available in Python, it's underfoot everywhere you look: every module, every class, every list or dictionary or set attached to a module or class, every singleton object created at module level. It requires discipline and some Python expertise to avoid accidentally polluting global state at runtime.

Enter strict modules

One reasonable take might be that we’re stretching Python beyond what it was intended for. It works great for smaller teams on smaller codebases that can maintain good discipline around how to use it, and we should switch to a less dynamic language.

But we’re past the point of codebase size where a rewrite is even feasible. And more importantly, despite these pain points, there’s a lot more that we like about Python, and overall our developers enjoy working in Python. So it’s up to us to figure out how we can make Python work at this scale, and continue to work as we grow.

We have an idea: strict modules.

Strict modules are a new Python module type marked with __strict__ = True at the top of the module, and implemented by leveraging many of the low-level extensibility mechanisms already provided by Python. A custom module loader parses the code using the ast module, performs abstract interpretation on the loaded code to analyze it, applies various transformations to the AST, and then compiles the modified AST back into Python bytecode using the built-in compile function.
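As a rough illustration of those mechanics (a hypothetical sketch, not the actual strict-modules loader):

```python
import ast

def is_marked_strict(tree: ast.Module) -> bool:
    """Detect a top-level `__strict__ = True` assignment."""
    for node in tree.body:
        if (isinstance(node, ast.Assign)
                and len(node.targets) == 1
                and isinstance(node.targets[0], ast.Name)
                and node.targets[0].id == "__strict__"):
            return isinstance(node.value, ast.Constant) and node.value.value is True
    return False

def load_strict_module(source: str, filename: str) -> dict:
    """Illustrative only: parse, check, transform, and execute a module body."""
    tree = ast.parse(source, filename)

    # A real loader would run abstract interpretation over `tree` here to
    # verify purity, and apply transformations (e.g. __slots__ injection).
    code = compile(tree, filename, "exec")

    namespace: dict = {}
    exec(code, namespace)
    return namespace
```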

Side-effect-free on import

Strict modules place some limitations on what can happen at module top level. All module-level code, including decorators and functions/initializers called at module level, must be pure (side-effect free, no I/O). This is verified statically at compile time via the abstract interpreter.

This means that strict modules are side-effect-free on import: bad interactions of import-time side effects are no longer possible! Because we verify this with abstract interpretation that is able to understand a large subset of Python, we avoid over-restricting Python’s expressiveness: many types of dynamic code without side effects are still fine at module level, including many kinds of decorators, defining module-level constants via list or dictionary comprehensions, etc.

Let’s make that a bit more concrete with an example. This is a valid strict module:

```python
"""Module docstring."""
__strict__ = True

from utils import log_to_network

MY_LIST = [1, 2, 3]
MY_DICT = {x: x + 1 for x in MY_LIST}

def log_calls(func):
    def _wrapped(*args, **kwargs):
        log_to_network(f"{func.__name__} called!")
        return func(*args, **kwargs)
    return _wrapped

@log_calls
def hello_world():
    log_to_network("Hello World!")
```

We can still use Python normally, including dynamic code such as a dictionary comprehension and a decorator used at module level. It’s no problem that we talk to the network within the _wrapped function or within hello_world, because they are not called at module level. But if we moved the log_to_network call out into the outer log_calls function, or we tried to use a side-effecting decorator like the earlier @route example, or added a hello_world() call at module level, this would no longer compile as a strict module.

How do we know that the log_to_network or route functions are not safe to call at module level? We assume that anything imported from a non-strict module is unsafe, except for certain standard library functions that are known safe. If the utils module is strict, then we’d rely on the analysis of that module to tell us in turn whether log_to_network is safe.

In addition to improving reliability, side-effect-free imports also remove a major barrier to safe incremental reload, as well as unlocking other avenues to explore for speeding up imports. If module-level code is side-effect-free, we can safely execute individual statements in a module lazily, on demand, when module attributes are accessed, instead of eagerly all at once. And given that the shape of all classes in a strict module is fully understood at compile time, in the future we could even try persisting module metadata (classes, functions, constants) resulting from module execution in order to provide a fast-path import for unchanged modules that doesn’t require re-executing the module-level bytecode from scratch.

Immutability and slots

Strict modules, and the classes defined in them, are immutable after creation. The modules are made immutable by internally transforming the module body into a function, with all of the global variables accessed as closure variables; a rough illustration follows below. These changes greatly reduce the surface area for accidental mutation of global state, though mutable global state is still available if you opt in via module-level mutable containers.
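Here is that rough illustration (hypothetical, not the loader's real output):

```python
# A strict module like:
#
#     MAX_RETRIES = 3
#     def should_retry(attempt):
#         return attempt < MAX_RETRIES
#
# behaves roughly as if its body ran inside a function, so top-level names
# become closure variables instead of entries in a mutable module __dict__:
def _strict_module_body():
    MAX_RETRIES = 3

    def should_retry(attempt):
        # Closure-variable access, not a global dictionary lookup.
        return attempt < MAX_RETRIES

    return {"MAX_RETRIES": MAX_RETRIES, "should_retry": should_retry}

_exports = _strict_module_body()  # the names the module exposes
```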

Classes defined in strict modules must also have all members defined in __init__ and are automatically given __slots__ by the module loader’s AST transformation, so it’s not possible to tack on additional ad-hoc instance attributes later. For example, in this class:

```python
class Person:
    def __init__(self, name, age):
        self.name = name
        self.age = age
```

The strict-modules AST transformation will observe the assignments to attributes name and age in __init__ and add an implicit __slots__ = ('name', 'age') to the class, preventing assignment of any other attributes to instances of the class. (If you are using type annotations, we will also pick up class-level attribute type declarations such as name: str and add them to the slots list as well.)
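The transformed class then behaves roughly as if it had been written like this (a hand-written equivalent for illustration, not the loader's actual output):

```python
class Person:
    __slots__ = ('name', 'age')  # added implicitly by the AST transformation

    def __init__(self, name, age):
        self.name = name
        self.age = age

p = Person("Ada", 36)
try:
    p.nickname = "ada"  # ad-hoc instance attributes are no longer possible
except AttributeError as exc:
    print(exc)  # 'Person' object has no attribute 'nickname'
```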

These restrictions don’t just make the code more reliable, they help it run faster as well. Automatically transforming classes to add __slots__ makes them more memory efficient and eliminates per-instance dictionary lookups, speeding up attribute access. Transforming the module body to make it immutable also eliminates dictionary lookups for accessing top-level variables. And we can further optimize these patterns within the Python runtime for further benefits.

What’s next?

Strict modules are still experimental. We have a working prototype and are in the early stages of rolling it out in production. We hope to follow up on this blog post in the future with a report on our experience and a more detailed review of the implementation. If you’ve run into similar problems and have thoughts on this approach, we’d love to hear them!

Many thanks to Dino Viehland and Shiyu Wang, who implemented strict modules and contributed to this post.

If you want to learn more about this work or are interested in joining one of our engineering teams, please visit our careers page, follow us on Facebook or on Twitter.

Implementing Dark Mode In iOS 13

One of the most exciting announcements at WWDC this year was the introduction of platform-wide dark mode in iOS 13. During WWDC a group of enthusiastic iOS engineers and designers from Instagram’s design systems team banded together to begin plotting out what it would take to adopt dark mode in our app. This week’s update to Instagram includes full support for iOS dark mode. This took months of work and collaboration between numerous design and engineering teams in the company. As such, we wanted to take some time to share how we approached adopting dark mode and some of the obstacles we encountered along the way.

API Philosophy

Apple did an excellent job shaping how dark mode works in iOS 13. Most of the heavy lifting is done on your behalf by UIKit. Because of this, one of the key principles we had when building out dark mode support in our app was that we should “stand on the shoulders of giants” and try to stick with Apple’s APIs as much as possible. This is beneficial for several reasons:

- Ease of use — UIKit does most of the work in selecting appropriate colors and transitioning between light mode and dark mode. If we wrote our own APIs we’d have to handle this ourselves.
- Maintainability — Apple maintains the APIs so we don’t have to. Any wrappers we have can ultimately be switched over to just use UIKit APIs as soon as our minimum supported OS version is iOS 13.
- Familiarity — Newcomers to Instagram’s iOS codebase who are familiar with how UIKit does dark mode will feel right at home.

That being said, we didn’t use UIKit’s APIs alone, since most developers in the company and our build systems are all still using Xcode 10, and introducing iOS 13 APIs would cause build breakages. We went with the approach of writing thin wrappers around UIKit APIs that are compatible with Xcode 10 and iOS 12.

Another principle we followed was to introduce as few APIs as possible, and only when needed. The key reason for this was to reduce complexity for product teams adopting dark mode: it’s harder to misunderstand or misuse APIs if there are fewer of them. We started off with just wrappers around dynamic colors and a semantic color palette that our design systems team created, then introduced additional APIs over time as the need grew within the company. To increase awareness and ensure steady adoption, whenever we introduced a new API we announced it in an internal dark mode working group and documented it in an internal wiki page for the project.

Primitives and Concepts

Apple defines some handy dark mode primitives and concepts, and since we decided to build on top of their APIs we embraced these as well. Covering them at a high level, we have:

- Dynamic colors — Colors that change in response to light mode/dark mode changes. They can also change in response to “elevation” and accessibility settings.
- Dynamic images — Similar to dynamic colors, these are images that change in response to light mode/dark mode changes.
- Semantic colors — Named dynamic colors that serve a specific purpose. For example, “destructive button color” or “link text color”.
- Elevation level — Things presented modally in dark mode change colors very slightly to demonstrate that they’re a layer on top of the underlying UI. This concept largely hasn’t existed in light mode because dark dimming layers are sufficient to differentiate modal layers presented on top of others.

Building UIKit Wrappers

One of the key APIs iOS 13 introduces for dark mode support is UIColor’s +colorWithDynamicProvider: method, which generates colors that automatically adapt to dark mode. This was the very first API we sought to wrap for use within Instagram and is still one of our most used dark mode APIs. We’ll walk through implementing it as a case study in building a backwards-compatible wrapper.

The first step in building such an API is defining a macro that allows us to conditionally compile out code for people who are still using stable versions of Xcode. Next, we declare a wrapper function for dynamic colors. Within this function we use our macro to ensure that developers using older versions of Xcode can still compile, and we also introduce a runtime check so that the app continues to function normally on older versions of iOS. If both checks pass we simply call into the iOS 13 +colorWithDynamicProvider: API; otherwise we fall back to the light mode variant. A sketch of the whole pattern follows below.

You may notice that we pass an IGTraitCollection into IGColorWithDynamicProvider's block instead of a UITraitCollection. We introduced IGTraitCollection as a struct that contains UITraitCollection's userInterfaceStyle and userInterfaceLevel values as isLight and isElevated respectively, since those properties are only available when linked with newer iOS versions. More on that later.
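Here is a sketch of what such a wrapper can look like. It is a reconstruction based on the description above, not Instagram's actual source; the SDK-detection macro in particular is an assumption.

```objc
#import <UIKit/UIKit.h>

// Assumed compile-time guard: true only when building against the iOS 13 SDK.
#if defined(__IPHONE_13_0) && __IPHONE_OS_VERSION_MAX_ALLOWED >= __IPHONE_13_0
#define IG_IOS_13_SDK_AVAILABLE 1
#else
#define IG_IOS_13_SDK_AVAILABLE 0
#endif

// Simplified trait struct exposing only what product code needs.
typedef struct {
    BOOL isLight;
    BOOL isElevated;
} IGTraitCollection;

UIColor *IGColorWithDynamicProvider(UIColor *(^provider)(IGTraitCollection)) {
#if IG_IOS_13_SDK_AVAILABLE
    if (@available(iOS 13.0, *)) {
        // Modern path: let UIKit resolve the color per trait collection.
        return [UIColor colorWithDynamicProvider:^UIColor *(UITraitCollection *traits) {
            IGTraitCollection igTraits = {
                .isLight = traits.userInterfaceStyle != UIUserInterfaceStyleDark,
                .isElevated = traits.userInterfaceLevel == UIUserInterfaceLevelElevated,
            };
            return provider(igTraits);
        }];
    }
#endif
    // Legacy path (older SDK or older OS): resolve the light variant once.
    IGTraitCollection lightTraits = { .isLight = YES, .isElevated = NO };
    return provider(lightTraits);
}
```

A call site might then look like IGColorWithDynamicProvider(^UIColor *(IGTraitCollection traits) { return traits.isLight ? lightVariant : darkVariant; }), again as an illustration of the shape rather than the exact API.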

Now that we have IGColorWithDynamicProvider, we can use it everywhere in the app where we need dynamic colors. Developers can use this freely without worrying about build failures or runtime crashes, regardless of what version of Xcode they or their peers are using. Instagram has historically had a semantic color palette, introduced in our 2016 redesign, and we collaborated with our design systems team to update all the colors in it to support dark mode using IGColorWithDynamicProvider.

Once we had this pattern defined for wrapping UIKit’s APIs, we continued to add more as they were needed. The set we ended up with is:

- IGColorWithDynamicProvider, as shown here.
- IGImageWithDynamicProvider, for creating “dynamic images“ that automatically adapt to dark mode.
- IGActivityIndicator functions, for creating activity indicators with styles that work in light mode, dark mode, and older versions of iOS.
- IGSetOverrideUserInterfaceStyle, for forcing views or view controllers into particular interface styles.
- IGSetOverrideElevationLevel, for forcing view controllers into particular elevation levels.

Small side note: we discovered towards the end of our dark mode adoption that our implementation of dynamic colors had equality implications, because a new instance of UIColor was returned each time and the only thing comparable about each was the block passed in. To resolve this, we modified our API slightly to create single instances of each of the semantic colors so that they were comparable. Doing something like dispatch_once-ing your semantic colors, or using asset catalog-based colors and +colorNamed:, will produce comparable colors if your app is sensitive to color equality.

Fake Dark Mode

One tricky thing when adopting technologies in iOS betas is getting adequate test coverage. Convincing people using the internal build of Instagram to install iOS 13 on their devices isn’t a great idea because it’s unstable and challenging to help set up, and even if we were to get people testing on iOS 13, the builds we distribute internally were still largely being linked against the iOS 12 SDK, so the changes wouldn’t show up anyway.

I briefly touched on our IGTraitCollection wrapper for UITraitCollection that came in handy in the course of building out dark mode. One clever testing trick this wrapper afforded us is something we’ve come to call “fake dark mode” — an internal setting that overrides IGTraitCollection to become dark even in iOS 12! Nate Stedman, one of our iOS engineers in New York, came up with this setting when we were first working on dark mode.

Our internal menu’s “fake dark mode” option, and fake dark mode running in a build linked against the iOS 12 SDK.

Our API for generating IGTraitCollections from UITraitCollections checks an _IGIsDarkModeDebugEnabled flag, which is backed by an NSUserDefaults setting for fake dark mode. There are of course some limitations with faking out dark mode in iOS 12, most notably:

- userInterfaceLevel isn’t available in iOS 12, so “elevated“ dynamic colors never appear in fake dark mode.
- Forcing particular styles via our -setOverrideInterfaceStyle: wrapper has no effect in fake dark mode.
- UIKit components that use their default colors don’t adapt to fake dark mode in iOS 12, since they have no knowledge of dark mode.

With this addition to our dark mode wrappers we were able to get much broader test coverage than we otherwise would have.

Conclusion

Dark mode has been a highly requested feature of ours for quite a while.

A recent public Q&A with Adam Mosseri, head of Instagram.

We had been a little reluctant to introduce dark mode in the past because it would’ve been a tremendous undertaking, but the excellent tools that Apple provides and their emphasis on dark mode in iOS 13 finally made it possible for us! Of course, the actual implementation still wasn’t easy: we’ve been working on this since WWDC, and it demanded ample design and engineering deep dives into every part of the app (and admittedly, we have probably missed some). This journey has been worth it: on top of the benefits dark mode provides, such as eye strain reduction and battery savings, it makes our app look right at home on iOS 13!

A huge thank you to Jeremy Lawrence, Nate Stedman, Cameron Roth, Ryan Olson, Garrett Olinger, Paula Guzman, Héctor Ramos, Aaron Pang, and numerous others who contributed to our efforts to adopt dark mode. Dark mode is also available in Instagram for Android.

If you want to learn more about this work or are interested in joining one of our engineering teams, please visit our careers page, follow us on Facebook or on Twitter.
There are of course some limitations with faking out dark mode in iOS 12, most notablyuserInterfaceLevel isn’t available in iOS 12, so “elevated“ dynamic colors never appear in fake dark mode.Forcing particular styles via our -setOverrideInterfaceStyle: wrapper has no effect in fake dark mode.UIKit components that use their default colors don’t adapt to fake dark mode in iOS 12 since they have no knowledge of dark mode.With this addition to our dark mode wrappers we were able to get much broader test coverage than we otherwise would have.ConclusionDark mode has been a highly requested featured of ours for quite a while.A recent public Q&A with Adam Mosseri, head of InstagramWe had been a little reluctant in introducing dark mode in the past because it would’ve been a tremendous undertaking, but the excellent tools that Apple provides and their emphasis on dark mode in iOS 13 finally made it possible for us! Of course the actual implementation still wasn’t easy, we’ve been working on this since WWDC and it demanded ample design and engineering deep dives into every part of the app (and admittedly, we have probably missed some). This journey has been worth it, on top of the benefits dark mode provides such as eye strain reduction and battery savings, it makes our app look right at home on iOS 13!A huge thank you to Jeremy Lawrence, Nate Stedman, Cameron Roth, Ryan Olson, Garrett Olinger, Paula Guzman, Héctor Ramos, Aaron Pang, and numerous others who contributed to our efforts to adopt dark mode. Dark mode is also available in Instagram for Android.If you want to learn more about this work or are interested joining one of our engineering teams, please visit our careers page, follow us on Facebook or on Twitter.Implementing Dark Mode in iOS 13 was originally published in Instagram Engineering on Medium, where people are continuing the conversation by highlighting and responding to this story.