Developing the Twitter PWA

The most recent version of Twitter’s web app, Twitter Lite, has just been released. It’s a Progressive Web App, which is fast and responsive, uses less data, takes up less storage space, and supports push notifications and offline use in modern browsers.

In this session, Zero Cho from the Twitter Lite team will give us a sense of the architecture of Twitter Lite, the technologies used, and lessons learned in building this most recent version of one of the world’s most widely used web services.

(upbeat techno music) (applause) – Thank you everyone.

My name is Zero, and today I’m going to tell you the stories behind the decisions we made when we were building Twitter Lite. A little bit about myself first: I come from a little island country called Taiwan, and then I flew to San Francisco and started working at Twitter there.

I’m currently a senior software engineer on the Twitter Lite team, and my Twitter handle, which is really important because you’re all going to follow me after this, is @itszero.

Before we actually get into how we built Twitter Lite, I want you to meet the team behind it, because it’s really easy for everyone outside the company to make assumptions about it.

Twitter is a really large company, with something like 1,000 engineers, so surely there must be 1,000 people working on this project, and it should be really easy to build this thing in a week. But no.

What we had at the beginning was three engineers and two designers, and then we grew the team over the two years we invested in this project to what it is today, which is 13 people.

We are still lite.

Still a really small team trying to move fast, trying to iterate on the product.

Today’s agenda has three parts.

The first one is the why.

What is Twitter Lite, and why did we want to rebuild it? That includes the history of Twitter’s mobile website, which leads to why we wanted to rebuild it. Then we talk about the how.

We’ll talk about some design approaches we took, some consolidation we did, and some architectural details of how we built Twitter Lite. Finally I’m going to talk about what we actually built.

What does it feel like using a progressive web app versus a native app? What extra things do we do to manage DOM elements to keep performance really good on mobile devices? So, what is Twitter Lite? It’s Twitter’s mobile website, which you can access at m.twitter.com if you pull out your phone and just go there.

It is a progressive web app by design.

From the ground up, we wanted to take advantage of all the capabilities that modern browsers can provide, and Twitter Lite is optimised for data usage, because we want to broaden our audience and reach the people who don’t have access to the fast, stable Internet that we have here. We have been working on Twitter Lite for about two years, and during that time we launched it in phases. We started with logged-in users, then moved on to the logged-out version, and then we launched the whole product, with some additional features like the data saver, back in April.

Since then Twitter Lite has become Twitter’s fastest growing app. We have 85% more visits to the home timeline, and 80% more original tweets composed on Twitter Lite.

We’ve got a million more app launches per day, and since we added support for push notifications, we’ve been sending out 10 million push notifications a day. Those are all really good numbers and we are really excited about this product. If we rewind time: why remake it? It basically boils down to three points. The first one is that device and browser capabilities have completely changed since, say, 2008. That particular year is when the first Android phone, the T-Mobile G1, came onto the market, and the year before that is when the iPhone came onto the market.

We witnessed a transition from a feature-phone-dominated market, running really simple text-based browsers that nobody really used, to a smartphone-majority market where the browser on your phone is basically a desktop-class browser.

Everything has changed.

We needed to rethink what we can do in browsers. Second, we had four different web stacks at the time to support all the devices we intended to support, and three of them were mobile websites. We wanted to think about how we could consolidate the stacks we have. Lastly, we wanted to better serve markets with slow, intermittent networks.

This is the feature phone version of our website.

You can see that it’s just really simple HTML pages. It’s fully server-side rendered; it doesn’t really use JavaScript.

It maximises compatibility with all the phones out there, but we wanted to provide our users with better experiences. We had this smartphone version, which runs some JavaScript, and it’s good, it works, but the design is kind of stuck back in the 2012 era and we wanted to improve on that.

So we did.

We made a new logged-out version of the website that looks kind of like the app. The thing is, it loads really well because it’s fully server-rendered, so it downloads extremely quickly and everything, but it doesn’t really do any interactive features.

You cannot like a tweet on it, you cannot retweet a tweet from it, you cannot follow a user on the platform.

If you try, it will ask you to log in, and after you’re logged in you get taken back to the original smartphone website you saw before. This creates inconsistency in our user experience, and we really don’t want that.

And lastly there’s our desktop website, which is a hybrid of server-side rendering and client-side rendering, and uses an in-house open-source framework called Flight.

It has extensive support for all the features you could ever think of, some of which maybe only one or two people use today, but we still support them.

This is the feature set we eventually want to reach. That’s the history of Twitter’s websites, and now we can talk about how we actually approached rebuilding this website. If there is only one takeaway from today’s talk, I think it would be: think like an app developer. A website used to be a thing you only browsed for a few minutes before moving on to the next page, but now we are actually designing an app, even though it runs on the web platform. It could be running as a background task for six hours or even more.

We need to start thinking about data usage, CPU usage, battery usage, that kind of thing.

These are not problems that only web developers have been thinking about.

There are a lot of problems that app developers have already dealt with, like laying out different things on the screen, or thinking about API compatibility, which brings us to this slide.

Android developers have been dealing with this for a really long time.

New OS versions keep coming out, say Android 6.0 or 7.0, while they still target SDKs back to something like 4.0.

We have the same problem on the web platform as well, for example with service workers.

Although the support looks pretty good, even within Chrome, different versions have different levels of service worker support, because the platform has been changing all the time. A method might be available on an event in one particular version of Chrome but not in a newer one. We need to be really careful about probing to see which features are actually supported in the browser, and how we can actually use them.
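That kind of probing is ordinary feature detection. A minimal sketch (the helper name and the injected `globalScope` parameter are my own, used so the check can run outside a browser):

```javascript
// Hypothetical helper: probe for service worker support before using it,
// rather than assuming a browser version implies a feature set.
// `globalScope` is injected so the check is testable outside a browser.
function supportsServiceWorker(globalScope) {
  return Boolean(
    globalScope.navigator &&
    'serviceWorker' in globalScope.navigator &&
    typeof globalScope.navigator.serviceWorker.register === 'function'
  );
}

// In a real page you would call supportsServiceWorker(window) and only
// then register: navigator.serviceWorker.register('/sw.js')
```

The same pattern extends to individual methods and events: check for the specific member you are about to call, not the browser version.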

All the big apps on your phone have something called an HTTP client or network manager that manages network requests.

And the corresponding caching and retry mechanisms. We wanted the same thing for our app, because it was becoming more complicated. It’s the same on the server side and the client side, but we have different requirements for how we construct HTTP clients in both cases. We borrowed the idea from our server-side implementation, in which we model every service.

We model a remote call basically as a function that takes in a request and gives you back a response. Talking about actual app requirements, there are things you want added along the way. If you are doing a POST request,

a POST request wants CSRF and authentication attached to it.

You want to add some application headers, you want to do some error handling, and finally you want to encapsulate the retry logic in the client library, so application code doesn’t have to deal with it.

We model this as a filter.

A filter is basically a function that takes a request and a service.

The service acts like the next method in the chain, and gives you back a response. To give you a more concrete idea, the code looks something like this for the error reporting filter: you take a request and a service, you can do whatever modification you want to the request before it’s passed to the service, and then you actually call the service.

It goes down the chain to more filters, or to the actual request.

Then you can do post-processing, because we are using JavaScript promises.

You catch a thrown error and check whether it’s an error you want to report, and report it if it is. If not, you just reject it.
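The filter idea can be sketched in plain JavaScript. This is illustrative, not Twitter’s actual code; the helper names (`shouldReport`, `reportError`) and the composition function are assumptions:

```javascript
// A "service" is a function: request -> Promise<response>.
// A "filter" wraps a service and returns a new service, so filters
// compose into a chain, with the real HTTP call at the end.
const errorReportingFilter = (service) => (request) =>
  service(request).catch((error) => {
    if (shouldReport(error)) reportError(error); // e.g. send to a logging endpoint
    return Promise.reject(error); // keep rejecting so callers still see it
  });

const authFilter = (service) => (request) =>
  service({
    ...request,
    headers: { ...request.headers, authorization: 'Bearer <token>' },
  });

// Illustrative stand-ins for the pieces the filters assume exist:
const shouldReport = (error) => error.status >= 500;
const reportError = (error) => { /* send to an error collector */ };

// Compose: auth runs first, error reporting wraps the whole chain.
const buildClient = (baseService) =>
  errorReportingFilter(authFilter(baseService));
```

Each filter stays ignorant of the others; the application only sees the final composed client.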

It can keep going down the chain, and the application gets the response back. That’s it for APIs, and the next thing I want to talk about is design.

Design is something we wanted to solve from the beginning of this team.

When we started, with three engineers and two designers, we talked to each other a lot.

There were a lot of things for which we didn’t share a language.

This has traditionally been solved either by having the designer learn all the engineering jargon, or by having an engineer dedicated to building a prototype of everything.

This still causes problems, because now when you are talking about the same thing there are two artifacts for it.

One is the original spec and one is the prototype, and they may have different bugs. They don’t look exactly the same.

The idea we came up with is to start doing design using a component-based approach: design a suite of components that we can just reuse.

This abstraction makes it much easier for a designer and an engineer to talk about the same thing, because now we have a word for it.

It’s easier to maintain consistency throughout the app as well, because we’re using the same components, which look exactly the same on different screens. These are a few examples of them.

This one is a top app bar, which has a left control, a title, a search box, and a right control. When we talk about it we can say: I don’t want the left control, but I do want the search box; or, the title should be a search box.

It’s easier to use words to describe the state the component will be in.

There are buttons for different purposes as well. This doesn’t only affect our design process; it actually has implications for our engineering process as well.

Before this, a traditional website at Twitter usually ran on Scala.

We used Scala views for the server-side logic.

We had JavaScript for the client-side logic.

Finally we had HTML templates that run on both server and client, so we can render them wherever needed.

Outside of those three files, we still needed CSS to do the styling.

Now we just use JavaScript to encapsulate the server-side logic, client-side logic, and template into one file. We still use CSS, but we use webpack’s CSS loader to alleviate some of the traditional CSS problems, like class name duplication.

Whenever we write .root in AppBar, it appends generated numbers and a prefix to the class name, so it never duplicates anywhere else in the app. The most important thing is that although we still have two files, we keep them together in one folder, so it works like a component. Whenever we need to share the component with an internal team, it’s just: copy this whole thing over, or package it in a library and have that included.

This makes sharing a component, and the development process among different teams, much easier.
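Scoped class names of this kind are what CSS Modules give you. A minimal sketch of the relevant webpack rule, assuming css-loader (the talk doesn’t name the exact loader, so treat this as one plausible setup):

```javascript
// webpack rule (fragment, goes under module.rules): `.root` inside
// AppBar's stylesheet becomes something like "AppBar__root--x7f2a",
// so it can never collide with another component's .root.
const webpackCssRule = {
  test: /\.css$/,
  use: [
    'style-loader',
    {
      loader: 'css-loader',
      options: {
        modules: { localIdentName: '[name]__[local]--[hash:base64:5]' },
      },
    },
  ],
};
```

The generated name encodes the component, the local class, and a hash, which is what makes co-locating styles with the component safe.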

Every app out there also has a background thread or process running.

It handles things like push notifications.

In progressive web app terms, this is called a service worker.

It’s essentially an isolated background JavaScript execution environment. We use it to cache our assets, so our repeat users don’t have to download everything again; they’re just served from their disk cache.

We also cache things like the top emojis.

People use them a lot.

We use it to handle incoming push notifications, and we use it to intercept network requests to provide some basic offline and API retry support as well.
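The asset caching can be sketched as a cache-first, network-fallback strategy. I’ve written it here as a plain function with the cache and fetch passed in, so the idea is visible (and testable) outside an actual worker; in a real service worker this logic lives in a fetch event handler:

```javascript
// Cache-first with network fallback: serve repeat visitors from disk,
// fall back to the network (and populate the cache) on a miss.
async function respondCacheFirst(request, cache, fetchFn) {
  const cached = await cache.match(request);
  if (cached) return cached;
  const response = await fetchFn(request);
  await cache.put(request, response); // a real worker would clone() first
  return response;
}

// In an actual service worker this would be wired up roughly as:
//   self.addEventListener('fetch', (event) => {
//     event.respondWith(
//       caches.open('static-v1').then((cache) =>
//         respondCacheFirst(event.request, cache, fetch)));
//   });
```

The cache name and wiring are assumptions; the point is that repeat visits never hit the network for assets that are already on disk.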

On to the stack. It is a full JavaScript app from the back end all the way to the front end, and we run a Node.js server with the Express framework.

We use React as our view library.

We run React Router on top of it for routing, Redux for state management, and finally Jest and WebdriverIO for unit and functional testing. Anyone who writes JavaScript knows this is not the complete list of packages we’re using.

It’s just a highlight.

It looks like this when it’s actually working.

We always talk to api.twitter.com, so we can consolidate the whole logic of how we access different services into one HTTP service.

We share the Redux code between our server side and client side, but we don’t fully render on the server side.

This is like the Holy Grail of JavaScript.

Every time we talk about a JavaScript app, people ask: do you do isomorphic JavaScript? Do you do server-side rendering? We kind of do, but not exactly.

The idea is that we wanted to put more emphasis on time to interactivity.

We want users to be able to interact with content right after they see it.

To do this, what we want is to help the app boot up as fast as possible,

and render meaningful information as fast as possible. What we do is prefetch everything on the server by sharing the Redux code. We define certain promises that we want to complete before we go into rendering.

We fetch all the experiment data, we fetch settings, we fetch the current logged-in user, and then we collect them all together, serialise the Redux state, and render it into the HTML shell that we send to the user.

By the time the browser has loaded all the JavaScript and is ready to boot up the application, we have all the data in memory already.

It doesn’t have to make extra network requests, which carry a lot of HTTPS overhead. These steps save a lot of time.
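Conceptually, the server-side bootstrapping might look like this. The fetcher names and the `__INITIAL_STATE__` global are invented for illustration, not taken from Twitter’s code:

```javascript
// Gather the data the first render needs, then embed the serialised state
// in the HTML shell so the client boots with everything already in memory.
async function renderShell({ fetchSettings, fetchCurrentUser, fetchExperiments }) {
  // Run the prefetches concurrently; all must settle before we render.
  const [settings, user, experiments] = await Promise.all([
    fetchSettings(),
    fetchCurrentUser(),
    fetchExperiments(),
  ]);
  const preloadedState = { settings, user, experiments };
  // Escape "<" so the serialised JSON cannot close the <script> tag early.
  const json = JSON.stringify(preloadedState).replace(/</g, '\\u003c');
  return [
    '<!doctype html><html><body>',
    '<div id="react-root"></div>',
    `<script>window.__INITIAL_STATE__=${json}</script>`,
    '<script src="/bundle.js"></script>',
    '</body></html>',
  ].join('');
}
```

On the client, the store would be created from `window.__INITIAL_STATE__`, so the first interactive render needs no network round trips.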

We also have some tools to help us measure in production, because development and production have very different characteristics; we actually have a development mode and a production mode that run different checks.

You really want to check everything in production again, after you’ve done all your due diligence in development mode. Here’s one example: since we target developing markets, we really want to be mindful about CPU and data usage.

I spent a lot of time just analysing how production JavaScript is delivered and how the browser interacts with it, with help from the Google Chrome team.

In this particular example, we shrank the loading and start-up time from 5.27 seconds to just three seconds, just by improving our loading logic.

There is a blog post by our awesome teammate Armstrong about all the optimisations we did; just go over to his website. Beyond our own investigation, it is simply impossible to test all the smartphones out there.

There are thousands of Android phones, different generations of iPhone, and a lot of other phones as well.

We really, really want feedback from the field, but it’s also unsustainable to ask users to do a survey or to report problems and feedback.

They are just not going to do that, because all they want is to browse.

They are not interested in giving you feedback. To solve this problem, the approach we took is to just run measurements in production, programmatically.

We measure time to first render, time to first tweet, time to first interactivity, and we report them back to our server so we can plot them on a graph and monitor how it goes.

On top of those regular numbers, there are a few other things we measure as well, especially React’s lifecycle.

React has lifecycle events like mounting a component, unmounting a component, and render times. We capture those numbers and send them back to the server. We can be really rigorous about each deploy: does it get better? Does it get worse? Should we be worried about this? Is it over the alarm threshold already? We can monitor this in a much more automatic and proper way.
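A minimal sketch of that kind of in-production measurement. The metric names and reporter shape are illustrative; the clock and sender are injected so the logic is testable:

```javascript
// Record named timings relative to a start point and batch them to a
// metrics endpoint, so real-device numbers can be graphed per deploy.
function createMetrics(now, send) {
  const start = now();
  const pending = {};
  return {
    mark(name) {
      // First mark wins: "time to first tweet" means the first one.
      if (!(name in pending)) pending[name] = now() - start;
    },
    flush() {
      send(pending);
    },
  };
}

// Usage in the app would be roughly:
//   metrics.mark('time_to_first_tweet')  when the first tweet renders
//   metrics.flush()                      on idle, e.g. via navigator.sendBeacon
```

In a browser, `now` would be `() => performance.now()` and `send` a beacon to the metrics endpoint.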

We also use a tool to collect production JavaScript errors.

It really scared us when we first turned it on, because we saw a lot of errors coming in.

We had no idea those things were happening.

You really just have no visibility into how your app is doing in production, because nobody tells you anything.

This is a really, really helpful tool to have. So what did we build exactly? This is how it looks.

We just open it, and we can browse around and do things.

One of them is the Android app and one of them is the PWA, and they feel kind of the same.

You might feel that Twitter Lite is the one on the right and the Android app is on the left, except it’s not. It’s actually this:

the one on the right is the Android app and Twitter Lite is on the left, but they feel kind of the same.

It’s really hard to tell them apart.

I think we did a pretty good job there.

There are a lot of things we have to do to be able to sustain this performance.

This is how it looks conceptually. It’s basically an infinite timeline of a lot of tweets. Usually you start browsing those timelines, you just scroll down and up, and you can see DOM elements starting to accumulate.

If you go down ten pages, there may be a hundred tweets in it.

A tweet can be really, really complicated, because we have to support a lot.

Emojis, cards, images, even live videos. All kinds of things can show up in a tweet. On a desktop browser this would be fine, because desktops run really fast anyway, but mobile devices have limited CPU and battery power. We cannot just let the DOM grow without limit.

It would just drag down the app and make everything slow.

We have to start removing the things that go out of the viewport on either end. The tricky part: the tweet at the top is completely out of the viewport, so we can remove it, but not the one at the bottom, because it still has a tiny bit of its header in the viewport.

You really need to do this very carefully.

Otherwise users will see things popping in and out very obviously.

This is something the native iOS and Android UI toolkits already have built into their systems, but we had to rebuild it on the web to maintain our performance.
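The “remove only what is fully out of view” rule can be sketched as a pure function over item positions. This is a simplification of what list virtualisation does, with invented names:

```javascript
// Given each item's top offset and height, return the indices that are
// entirely outside the viewport: safe to unmount without visible popping.
// An item with even one pixel inside the viewport (like a tweet whose
// header peeks in at the bottom) must be kept mounted.
function fullyOffscreen(items, viewportTop, viewportBottom) {
  const out = [];
  items.forEach((item, i) => {
    const itemBottom = item.top + item.height;
    if (itemBottom <= viewportTop || item.top >= viewportBottom) out.push(i);
  });
  return out;
}
```

A real implementation would also keep a buffer of items just outside the viewport so fast scrolling doesn’t show blanks.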

This is hard as well. This is what a tweet looks like, but what if you resize the window or rotate the device? Things shrink and text gets wrapped onto the next line.

Tweet height is not a deterministic number; it’s determined by the content and the viewport. This causes a problem.

If you’re somewhere in the middle of a timeline and you rotate your device, the heights up there change, and the height changes accumulate over the items above you in the timeline, and usually you’ll be like, where am I now again? We really needed to find a way to keep your position stable, whenever you resize the window or rotate your device like crazy.

What we do is find the best tweet on your screen, an anchor, and keep it on screen no matter what. You can see that while resizing the screen, the anchor tweet always stays in view no matter how the layout reflows.

Using the same anchoring mechanism, we can also restore your position when you navigate away from the page and come back. Otherwise you’d go into a tweet detail, go back, and be like, why am I at the top of the timeline? Where was I? This little detail really helps enhance the user experience.
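Anchoring can be sketched like this: pick the first fully visible item before a resize, then adjust scroll afterwards so that item stays put. Illustrative, not Twitter’s implementation:

```javascript
// Choose the anchor: the first item fully inside the viewport.
function pickAnchor(items, viewportTop, viewportBottom) {
  return items.findIndex(
    (item) => item.top >= viewportTop && item.top + item.height <= viewportBottom
  );
}

// After heights change, scroll so the anchor keeps its on-screen offset.
function restoreScroll(oldItems, newItems, anchorIndex, oldScrollTop) {
  const offsetInViewport = oldItems[anchorIndex].top - oldScrollTop;
  return newItems[anchorIndex].top - offsetInViewport;
}
```

The same pair also covers back-navigation: remember the anchor and its offset, then recompute the scroll position when the list remounts.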

As we discussed earlier, one of the major reasons we reworked the website is to reach every person on the planet.

This is one of the core values that we have at Twitter that we really care about.

We really want to let more people come onto the platform, to be able to get the news and talk about things happening in the world.

Slow Internet still dominates the world.

If you look at the numbers, 32% are 3G users. There are 47% on 2G.

That’s actually more than half of your potential users on a really slow network, without a stable connection.

On top of that, many people in developing markets don’t use unlimited data plans.

They’re on prepaid data plans because they cannot afford otherwise, which makes you really want to think about how much data your app is using on their phone. To address this, we started with our own app size.

Our app is much smaller than the native Android apps, but this is of course not a fair comparison, because a native app and a browser app have very different needs and execution environments, so they have to pack different things into the app. But in this case, when we want to reach more users, this really matters. We take advantage of the browser’s ubiquity and capability.

After that we started looking at what else we could do to reduce the amount of data we’re using. This is why we started working on the data saver feature, which starts by removing the most expensive resource: images.

Instead of automatically downloading them in your timeline as you scroll by,

we download a thumbnail, and a thumbnail is smaller than one KB using the WebP format, where our regular images are in the 20K to 100K ballpark.

There were a few unintended side effects that turned out to be really useful to us.

By spending less time downloading those big images, or wasting time on them, we now have more time to download actual API responses.

When users request more data by swiping down, loading more of the timeline, it actually feels much smoother and faster as well. But there is one thing that’s really important: we don’t want to strip our users of the decision. We want them to be able to see what they are going to get; we tell them the cost of downloading an image, and we let them decide whether they want to download it or not.

This empowers them to make the decision rather than us making the decision for them.

We have received a lot of positive feedback on these features.
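The data saver behaviour boils down to a small per-image decision on the client. A sketch; the sub-1 KB WebP thumbnail figure comes from the talk, and the function and field names are my own:

```javascript
// With data saver on, load only the tiny WebP thumbnail and wait for an
// explicit tap before fetching the full-size image; otherwise load the
// full image automatically as the tweet scrolls into view.
function imageToLoad(image, { dataSaver, userTapped }) {
  if (!dataSaver || userTapped) return image.fullUrl;
  return image.thumbnailUrl; // ~1 KB WebP preview
}
```

The tap is what keeps the decision with the user rather than the app.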

There’s also a small thing: when the preview image is already loaded and you tap the image,

users would be like, is this thing loading or not? It looks blurry, but I don’t see the full image coming in. So we added this really, really small loading indicator at the top.

It’s just a small line sweeping by, but on the phone you see it and know: okay, this is working.

I know my network is not that great, but it is still downloading in the background.

This gives them really good feedback. They can just navigate away if they don’t want to wait anymore,

instead of sitting there thinking, I don’t know if this app is still working.

These are a few pieces of feedback we got from our customers. “Twitter Lite is the best product from Twitter so far. I like it.”

This is from Indonesia, one of our target markets, and this one’s from the Philippines, which is also a target market for Twitter Lite.

Finally, we have people actually using it on 2G networks as well, which validates that the decisions we made, and the simulations we tried, actually work on a real network.

These are all the really happy reactions that we get and we are really happy to see those as well.

Outside of the data saver mode, we also do other things to try to reduce the load time.

One of the things we can do is code and bundle splitting.

There are a few points I want to hit about how we do this.

The first one is stable bundles. Your app is large, and if you have a profile bundle or a direct messages bundle, you don’t want a change to one thing to cause other bundles to change as well.

You want a bundle to be affected only by the code you’re actually changing.

That way, whenever we deploy a new version, you don’t have to download the whole app just to get the one module that was updated.

The second thing is lazy loading features.

You really only want to load the code when it becomes relevant.

You don’t want users to download a keyboard bundle when the device doesn’t even have a keyboard. You don’t want them to download direct messages unless they’re interested in viewing direct messages. The next thing is that we want scalable code-splitting. A lot of code-splitting can be done by manually deciding what goes into which bundle, but that’s not a scalable way to do it, because your app grows, your team grows, and people lose track of what goes into which bundle.

We want it to be really really automatic and less manual intervention.

Finally, what we did is basically route-based bundles.

If you go to /itszero, which is my profile, you get the profile bundle.

If you go to /messages, you get the direct messages bundle. That way it’s really easy in the code to tell where a bundle starts and what it includes, and you don’t have to write another configuration file just for this purpose.
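Route-based splitting like this is typically expressed with dynamic `import()`, which lets the bundler infer one chunk per route. A sketch; the route paths come from the talk, the module paths and loader shape are assumptions:

```javascript
// One chunk per route: the bundler turns each dynamic import() into its
// own bundle, so /messages only pulls in the direct messages code.
const routes = [
  { path: '/messages', load: () => import('./screens/DirectMessages') },
  { path: '/:screenName', load: () => import('./screens/Profile') },
];

// Match a URL path to its route (exact paths first, then the
// catch-all profile route parameterised by screen name).
function matchRoute(path) {
  return routes.find((r) => (r.path.startsWith('/:') ? true : path === r.path));
}
```

The router calls `matchRoute(location.pathname).load()` and renders the screen once the chunk arrives; no separate bundle configuration file is needed.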

To help people be more vigilant about bundle size, we also wrote a script that generates a report comparing your branch to master, showing exactly which bundles changed and what the size difference is, and we include that in the code review.

Reviewers can come in and say: I think this belongs in a different bundle; or, I don’t think this feature is worth the size we’re adding.

This really helps people review not just your actual code, but your code size changes and their side effects as well.

I want to talk about globalisation.

Globalisation is actually a really big topic.

It’s not really as simple as just translating all the sentences you have in your app.

People especially think about it as this magical black box: you put in whatever your original language is, you tell it the target language, and you get the translated version out. But there are a lot of niche details around this topic. For example, there’s an HTTP header called Accept-Language.

It’s supported by most browsers already; you just don’t see it, because nobody really looks at every HTTP header their browser sends. When you configure the languages on your computer, say I want Taiwanese Chinese and I want U.S. English,

your browser builds up a preference list. In this case it’s the same thing.

I really want Chinese for the Taiwan region, then English for the U.S.

I’m okay with plain English if you don’t have the U.S. variant, but I really don’t want Chinese from other regions.

These are little things in each market. In Taiwan’s case, if you show Simplified Chinese to Taiwanese people, it is really offensive and a sure way to drive your users away. We really want to be careful about this.

This is one way you can get the language preferences from your users.
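Parsing that preference list server-side might look like this. A simplified q-value parser for illustration; real apps often lean on a library (Express, for instance, exposes `req.acceptsLanguages`):

```javascript
// Turn "zh-TW,en-US;q=0.9,en;q=0.8" into ["zh-TW", "en-US", "en"],
// ordered by the q quality values (default q = 1, q = 0 means "never").
function parseAcceptLanguage(header) {
  return header
    .split(',')
    .map((part) => {
      const [tag, ...params] = part.trim().split(';');
      const qParam = params.find((p) => p.trim().startsWith('q='));
      const q = qParam ? parseFloat(qParam.trim().slice(2)) : 1;
      return { tag: tag.trim(), q };
    })
    .filter(({ tag, q }) => tag && q > 0)
    .sort((a, b) => b.q - a.q)
    .map(({ tag }) => tag);
}
```

The server then walks this list and picks the first locale it actually has translations for, so a zh-TW user never silently falls back to zh-CN.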

There is more than just translation as well. On the right side you see Edward Snowden’s tweet in English.

On the left side you see the same tweet, but in Chinese for Taiwan, and you’ll notice the English version says 31.5K.

On the left side it says 3.1万, counting in units of ten thousand.

That’s because in Chinese, we count numbers in units of ten thousand.

If you write 31.5K, we can read it, but really it’s just weird.

It just doesn’t feel like home. Fortunately, you don’t have to know all these niche details yourself.

There is the CLDR data from the Unicode Consortium.

You can just use libraries that do this translation and formatting for you, and it has lists of country names and everything, properly translated already. This is something you definitely should think about when you’re trying to globalise your app.
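The built-in Intl API is one such library, backed by CLDR data. For example, compact notation picks the locale-appropriate grouping (this assumes a runtime with full ICU data, which modern browsers and current Node builds have):

```javascript
// Compact notation picks locale-appropriate units: English groups by
// thousands ("K"), Chinese by ten-thousands (万 / 萬).
const formatCount = (n, locale) =>
  new Intl.NumberFormat(locale, { notation: 'compact' }).format(n);

// e.g. formatCount(31500, 'en') gives a "K"-based string, while
// formatCount(31500, 'zh-Hant') uses the ten-thousand unit instead.
```

The same API family (Intl.DateTimeFormat, Intl.PluralRules, and so on) covers dates, plurals, and region names without hand-rolled rules.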

Finally, there are things we are adopting as well. We are looking into GraphQL, because it allows you to issue much more precise requests, instead of repeatedly asking the server for a full user or tweet object.

In some cases we already have some partial, older data.

We just need updated favourite counts or retweet counts, that kind of thing.

Using GraphQL can help us make each of those requests much more batched and precise.

We are looking into that right now.
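A hypothetical GraphQL query of that shape. The field and operation names are invented for illustration; this is not Twitter’s actual schema:

```javascript
// Ask only for the counts we need to refresh, instead of re-fetching
// full tweet objects we mostly already have.
const REFRESH_COUNTS_QUERY = `
  query RefreshCounts($ids: [ID!]!) {
    tweets(ids: $ids) {
      id
      favoriteCount
      retweetCount
    }
  }
`;
```

One such query can batch many tweet IDs, replacing a fan-out of full-object REST requests with a single small response.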

The next thing is a framework called React Native for Web, which is made by our tech lead Nicolas Gallagher. What it does is bring the React Native primitives onto the web.

You can use View instead of div, and that really helps, because it abstracts the DOM away from your app, so in the future, if you want to share your components with React Native, it’s much easier to do with this library. On the web it also uses flexbox by default.

You can do layout much more easily and quickly.

You don’t have to think about the pixels all the time.

You let the layout engine decide how to grow and shrink your components on different screen sizes.

Finally, we are also investigating full offline support. This is something developing-market users request a lot.

On YouTube, videos can be downloaded and watched offline later in developing markets.

Facebook actually works pretty well in this scenario as well.

You can just cut off your Internet connection and still browse; there are items in the feed.

You can click into an Instant Article and still read the whole article even though you’re not online.

This is something we definitely want to get into and investigate:

how we can implement it on the web platform as well. Some stories are best told visually, so I prepared a visualisation of our git repository here.

It looks pretty cool.

You can see that people actually move around to different parts of the areas we have.

They are not only staying in the same small area all the time.

I feel this really shows how our engineers care about the product.

They’ve seen everything in the product. They really care: they want to make the code better, make the data better.

That makes it a lot easier for everyone to work on it, and whenever you see a big boom in the app, that’s a refactoring happening.

You want to do refactorings really often. It makes it easier for your app to move forward, removing all the debt left behind.

That’s a big boom, right? It’s pretty.

Just think about that next time you’re doing a refactor; it’s that extra motivation. It’s not just making the code prettier.

It’s not just about making other people’s lives easier; there’s also a pretty boom happening in the background.

(laughter) This is it.

(applause) (upbeat techno music)