Alicia Sedlock has come all the way from Boston, and she and I had a lovely chat the other day. If you haven't noticed, I've been doing these video interviews with speakers at our conference, so there's half a dozen at the Web Directions site with Ethan Marcotte and Karen McGrane and Sara Soueidan, and all these people who came and spoke at Respond. And I'll be doing the same on Sunday in between the two conferences.
And as Alicia and I started talking, we realized there's an interesting thing about the northeast of the United States: Boston and Rhode Island, and then New York and Pennsylvania. A lot of the folks who've been around doing web things for a long time, that's where they live.
There's Dan Cederholm, Ethan Marcotte, Karen McGrane, Jeff Zeldman and there's a whole bunch of folks.
Brad Frost too, so it's not all just really old people living in that area. That kind of traditional idea of webbiness and the principles around the web, that area seems to be almost the epicenter of that in some ways.
We were talking about the comparison with Silicon Valley and the Bay Area, where that webbiness and those concerns around the web seem to be less strong, right? The web is more simply a mechanism for delivering solutions, rather than a thing that we've lived and breathed for many years.
Now obviously it's a simplification but it was an interesting observation.
She's from right up there where a bunch of great folks I just mentioned come from.
So she came all the way down here for which we thank her.
She works at a really interesting company called the Society for Grownups, or of Grownups.
Was it for or of? The Society of Grownups, which is basically a place where you can learn to grow up a bit around your finances and managing those sorts of grown-up adult things.
I think it's a really interesting idea, a great business, and it was one of the many reasons why I was interested in getting her to come speak here: she works at a really interesting-looking place. To talk to us about all matters testing on the front end, would you please welcome, all the way from, as I mentioned, Boston, Alicia Sedlock.
(clapping) - Good morning, everybody.
How's everyone doing? That was a great introduction, thank you John. For those of you who were out last night, you're probably kind of tired and cranky and grumpy, and maybe feeling a bit like this this morning. I found when traveling people get really amused when they find out that hedgehog ownership in the United States is legal, and so, I like to highlight that.
This is my hedgehog Maple.
You'll see a bit of her.
If you follow me on Twitter you will also see a lot of her. She hangs out with me a bunch, helps me with code reviews.
As John said, I work for Society of Grownups. We are a financial education company, and I'll talk a bit more about the interesting challenges that we're facing there.
I teach with Girl Develop It up in Boston, I'm not sure if they're here in Australia but they're a non-profit that teaches women how to code. And I also started recently a newsletter called Hedgehogs and Reflogs which is a combination of hedgehog facts and cool developer know-how tips, things going on in the industry.
So if you want both of those things you can sign up for that and you'll have a great inbox whenever I decide to update it.
Finance in the States is not really a trusted industry for lots of reasons.
And so, we're sort of trying to change that at Society of Grownups and really building trust in saying that hey, you can trust some financial services places. We're what's called a registered investment advisor so we are legally obligated to act in the best interest of our grownups which is really exciting.
There's not a lot of places that do that.
And as we're thinking about the trust in like the physical service we have and building trust around like the financial aspects of it, we're thinking a lot too about how do we build and maintain trust in our digital space as we're building out digital solutions that can reach all over the world.
And part of that is making sure you're not delivering things that are broken, that's a good first step we have found.
Imagine getting a car and you get in and it just doesn't work right off the bat, you're probably not gonna trust whoever made that car in the future.
We have a high risk there, right? As a startup, right out of the gate, if things are broken, we're already kind of nixing that trust.
That's something users have told us.
They don't want broken products; no one does. From a business perspective as well, whenever you're working with QA teams or business stakeholders and things like that, when things are constantly breaking they start giving you the side eye, like, what's going on over there? Are they actually working, or are they just kind of floating around, not really taking responsibility and accountability for their work? And then there's thinking about how much time we spend fixing the things that we keep breaking over and over and over again.
I remember the very first single page app I ever built not having tests and how many times I had to go and fix the same things over and over again because at the time I just didn't know any better. And so we have this great way to avoid these kind of things, right? This front end testing thing.
Testing's fun to talk about because no one's really passionate about it. It's not like animation, or grid layout, or flexbox, all this cool stuff coming out where people are like, yeah!
And then you talk about testing and everyone's like, hmm.
It's not the most exciting topic but it's really, really important.
There's a really large breadth up here, and so I started using the term front end testing to cover this really large scale of things that it involves.
And because people have different definitions, I have this vaguely working definition of what this is.
And essentially front end testing is the combination of techniques that hold us accountable for delivering software that works in whatever kind of form that takes.
Whether you're building a React thing or not. Maybe you just have a website; there are still things that break.
And we have lots and lots of ways to make sure that we are delivering quality code. Testing's not new.
Testing's been around for a while.
Whenever I talk with backend developers, testing is just kind of like a given at this point for them. I don't think I've really worked anywhere with even when I was consulting that any backend system didn't have tests.
They take it very seriously, and that's where I originally learned a lot of this, as we started moving to your Backbones and your Angulars and things like that.
Because I've seen things in the wild break all the time where it's like, if you had a test around this, maybe it wouldn't have broken.
I think there is a fun thing with a recent like Chrome release where webpages wouldn't load whatsoever.
And it's like, is there not a test (laughs) to make sure that it loads webpages? That is what you do. Important to note: I'm not here to tell you a specific way to implement this.
There's a lot here, there's a lot of libraries and other things available for this.
They all have their pluses and minuses.
Hopefully by the end of this you'll have a firm grasp of what's out there, and be able to see, given your team and your situation, what might be good for you. There is no one right way to do testing.
It's pretty fluid which is pretty great so just keep that in mind.
The other thing with testing, kind of a mindset you have to get into, is that when you're writing tests you're really, for the most part, focused on the outcome and not what goes into it.
Rather than testing for like how you go about kind of like delivering a solution, it's more like we know what the solution needs to be. Let's make sure it's always there.
It's kind of like building out the front end and implementing designs: the designers don't necessarily care if you're using one technique over another, it just has to look the way that they designed it. Think about it like that. A lot of the tools you're gonna briefly see today don't have to be part of some big automated build process of yours, but it really, really helps.
Because if you don't really have to think about running them as a side thing during development, you and your team are gonna be way more likely to keep up with it.
And to make sure you're running those tests, having it really be an integrated part of your development cycle and not some kind of afterthought. That's all great stage setting.
What are kind of the components of front end testing? We have these six areas and some of them are new and some of them are tried and true at this point.
We're gonna start with our good friends: unit, integration and acceptance testing.
All have very exciting names.
These have been around for a while. They've been proven out, figured out, and now it's just our job to start building them into our applications.
Let's start with unit testing, the lowest level that you can really get.
Unit testing is really making sure that the smallest kind of testable pieces of your application which we refer to as units, always work kind of independently.
We're talking about things like validations, things like calculations.
Very kind of small, isolated functions, pieces of code so that when you go and use them somewhere else you know specifically if that's still working. We're gonna check out the anatomy of a unit test. We're using this calculator here, financial services, sorry.
Let's kind of break this down.
The first thing: this is using a tool called Jasmine.
I have a couple different tools just so you can compare it like syntax, how you go about it and things like that.
The first thing you do in all of these, no matter what you're doing, is setting up your group of tests, your test suite. We like to group logical groups of things together like this. For a calculator, you might want tests all around the actual operations that it performs. And then you get into the actual test.
What is the behavior or output that you're expecting to always be true? Let's take the example of adding two numbers together, that's something a calculator should do pretty easily. Then we actually will run through our code and see if our expected output is actually what we get. In here, we're starting up our calculator hypothetically and we are going to call this add numbers function with seven and three.
The powers of the universe tell us that should be 10, and so you would always want to check, if you're checking for a value, that you are actually getting that.
If we somehow screw up how we add numbers down the line, this won't work and that's great to know.
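The test just described might look something like this. The Calculator and its addNumbers function are hypothetical, and the describe/it/expect names mirror Jasmine's real API, but tiny stand-ins are defined first so this sketch runs on its own outside a test runner:

```javascript
// Minimal stand-ins for Jasmine's describe/it/expect, just for illustration.
function describe(name, fn) { fn(); }
function it(name, fn) { fn(); }
function expect(actual) {
  return {
    toEqual(expected) {
      if (actual !== expected) {
        throw new Error(`Expected ${expected}, got ${actual}`);
      }
    },
  };
}

// The unit under test: a small, isolated piece of logic.
function Calculator() {}
Calculator.prototype.addNumbers = function (a, b) { return a + b; };

// A logical group of tests around the calculator's operations.
describe('Calculator operations', function () {
  // One specific behavior we expect to always hold.
  it('adds two numbers together', function () {
    const calc = new Calculator();
    const result = calc.addNumbers(7, 3);
    expect(result).toEqual(10); // if addNumbers ever breaks, this fails
  });
});
```

The anatomy is the part that matters: a named suite, one behavior per test, and an assertion on the expected output.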
And then that's all of it together: logical groups, specific bits of functionality, and then running through that and making sure it checks out the way that you want it to. There are some other things that unit tests usually involve, and some of these other testing types as well. Running repeated functions over and over again: if we need to do any kind of setup, like initializing, we can do that once and be done, and not have to change it for every single test. Same thing with tearing down. It's really just organization and cleanup stuff that's good to know is there, so you're not writing the same thing over and over again. The other big thing with unit testing is that you're not always able to check for specific values. When you can, that's great, but that's not always the case. And so, maybe we need to check for when things are actually just happening. There's a concept called spies that will watch specific functions and tell you when they get called, and what they're called with, so let's take a look at that.
Say we have that calculator to test.
Usually they remember the last result that they calculated, so let's test that. We give it a function, we say hey, we want you to watch this.
And then we're going to add our numbers.
We're gonna do some other calculation and then we're gonna say hey, that thing that we told you to spy on, it should have been called now.
And if it isn't then that's a failure.
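Sketched out, that spy flow might look like this. The calculator and its rememberResult side effect are hypothetical, and spyOn plus the matchers mirror Jasmine's real API, with small stand-ins defined so the snippet runs standalone:

```javascript
// Stand-in for Jasmine's spyOn: wraps a method and records its calls.
function spyOn(obj, method) {
  const original = obj[method];
  function wrapped(...args) {
    wrapped.calls.push(args);
    return original.apply(this, args);
  }
  wrapped.calls = [];
  obj[method] = wrapped;
}

// Stand-in for Jasmine's spy matchers.
function expect(actual) {
  return {
    toHaveBeenCalled() {
      if (!actual.calls || actual.calls.length === 0) {
        throw new Error('expected spy to have been called');
      }
    },
    toHaveBeenCalledWith(...args) {
      const match = actual.calls.some(
        (call) => JSON.stringify(call) === JSON.stringify(args)
      );
      if (!match) throw new Error('spy was not called with expected arguments');
    },
  };
}

// Hypothetical calculator with a side effect we can't check as a plain value:
// it remembers the last result it calculated.
function Calculator() { this.lastResult = null; }
Calculator.prototype.rememberResult = function (value) { this.lastResult = value; };
Calculator.prototype.addNumbers = function (a, b) {
  const sum = a + b;
  this.rememberResult(sum);
  return sum;
};

const calc = new Calculator();
spyOn(calc, 'rememberResult');   // hey, we want you to watch this
calc.addNumbers(7, 3);           // do some calculation
expect(calc.rememberResult).toHaveBeenCalled();       // it should have fired
expect(calc.rememberResult).toHaveBeenCalledWith(10); // and with this value
```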
You can also check that things are called with certain values, and you can mix and match too, so we can still check for specific values being updated, but we can also check for things that aren't as direct. I have found that unit tests pair pretty well with functional requirements.
Specifically, this should on a technical level do X, Y and Z; or thinking about how, for us, loan repayment happens.
There's a very specific way that happens.
And so, having test around that to make sure it's always there and tried and true is really helpful.
A lot of these tools, depending on your stack, you can choose whichever ones you want. Some of these are add-ons; Chai is an assertion library for Mocha. Jasmine and Mocha are the ones I see a lot, but depending on what you're working with you might be shoehorned.
We work in Ember at Society of Grownups and so we're sort of stuck with QUnit.
I think you can do Jasmine with Ember, but Ember comes with QUnit out of the box, so it's really easy for us to stick with that. You might want to do some research, like testing Angular with Mocha or Jasmine, and see where the pros and cons are there. And there's way more to it.
There's tons coming out all the time, as the web does, so I'm sure by the end of this there are gonna be new ones; look out for those. Taking it a level up, we then have integration tests. This is the idea that just because we're testing these small pieces in isolation doesn't mean that they're gonna work well together. And so we want to make sure that things are playing nicely when they combine.
Compared to unit tests, where it's like we have an input and then we have an output, more of a linear relationship, integrations are saying, hey, you might have one input that then has lots of side effects outward, or several different types of input that then come to a single output.
The scope of this increases. I've found it tends to have a lot to do, specifically with front end applications, with logic that updates something in your markup; these are really good for making sure that's occurring as expected.
This is using QUnit.
This is actually a test that we have in something we're building right now.
We can walk through this.
Again, you can see the syntax itself like the words are a bit different.
Instead of it should, it's test, but the anatomy is still there, which makes it easy jumping around between different types of testing libraries as well. You can get in the habit of expecting it to be set up a certain way. This is a progress bar that we have on a multi-step, educational kind of thing.
And so, we have a progress bar that lets you know how far you've gotten through.
This takes a variety of different inputs, right? You can be in the progress of a lesson or perhaps like through your sign up flow.
And so, we want to make sure that whatever input it gets it still renders the same way.
And so, in this example, we're doing some setup, setting what step we're on and things like that. We're gonna render out the progress bar itself, and then we're going to assert that it's rendering with a class that we expect, or in this case, not.
You can also test for negatives in certain cases if that makes sense.
Here it makes sense for us because we want to make sure something wasn't rendered a certain way.
We check that and no steps are completed because we're on the first step therefore you haven't completed anything.
And then we go on, we update the step and then we expect the UI to change.
Now we're expecting that we should have a step marked as complete.
There are a few more moving pieces here, but it's still concise in scope; this is one segment.
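As a rough sketch of the shape of that test: the progress-bar logic and class names below are hypothetical, and a tiny stand-in for QUnit's test/assert is included so it runs outside a browser or Ember. The point is the integration idea, one input fanning out into markup state:

```javascript
// Minimal stand-in for QUnit's test(name, fn) and its assert object.
function test(name, fn) {
  fn({
    ok(value, msg) { if (!value) throw new Error(msg || 'assertion failed'); },
    notOk(value, msg) { if (value) throw new Error(msg || 'assertion failed'); },
  });
}

// Hypothetical progress bar: one input (currentStep) drives the markup.
function renderProgressBar(currentStep, totalSteps) {
  let html = '';
  for (let step = 1; step <= totalSteps; step++) {
    const classes = step < currentStep ? 'step is-complete' : 'step';
    html += `<li class="${classes}"></li>`;
  }
  return html;
}

test('progress bar marks completed steps', function (assert) {
  // On step 1, nothing is complete yet, so assert the negative.
  let markup = renderProgressBar(1, 4);
  assert.notOk(markup.includes('is-complete'), 'no steps complete on step 1');

  // Advance a step, then expect the UI to change.
  markup = renderProgressBar(2, 4);
  assert.ok(markup.includes('is-complete'), 'step marked complete after advancing');
});
```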
It's kind of like this middle ground between unit and acceptance testing, which brings us to acceptance testing. For those who have dedicated QA teams, acceptance testing is sometimes something they do as well, in an automated fashion.
So if you have automation engineers on your QA team, they're likely writing acceptance tests; Selenium is a big thing that I've seen.
But we can do it too, and it's good to do, because we should be testing our stuff before we send it to QA. It's a pretty similar setup, but this is very high level now.
We're making sure that whole flows are still achievable and not really getting so much into the weeds, but we're making sure that all these units and integrations are all kind of cohesively working together. Let's break this down.
We're gonna go to our homepage, we're going to try to fill out a sign up form. We're gonna give it some false information and we're going to make sure that an error shows up, essentially you can't go on.
You visit the homepage and then we're gonna fill in this form that we have. We fill in the input, we click this button.
And the whole thing is like, this is running on what's called a headless browser. It's a browser that's running on your server that you don't actually see.
Selenium has a thing where you can actually pop up real browsers on your computer, which is a lot of fun to watch if you get a chance, because it's like some robot going through your computer and doing stuff. Yeah, we're gonna fill out information that should trigger a falsehood, and then we check that.
We're gonna see if our error shows up.
Now we don't care about the validation, right? We don't care about what's going on here, how this is being achieved, we just want to know it's achieved.
It's very much like a user story in that respect. There's something you want to achieve, but the implementation details aren't necessarily outlined in the user story itself.
I want to sign up for a newsletter is not the same as newsletter sign up should only allow like valid email addresses.
This is really the higher level, making sure you can get from point A to point B without anything really exploding, which is good, we don't like explosions.
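A sketch of that sign-up flow as an acceptance test: the chainable style loosely follows tools like Nightwatch, but the URL and selectors are made up, and a tiny fake browser simulating the page's email validation is included so the flow runs end to end on its own:

```javascript
// Fake headless "browser" that simulates one page: a sign-up form that
// shows an error when the email is invalid. Stand-in for illustration only.
function fakeBrowser() {
  const page = { email: '', errorVisible: false };
  const api = {
    url(address) { return api; },                          // pretend to navigate
    setValue(selector, value) { page.email = value; return api; },
    click(selector) {
      // Submitting runs the page's validation, which may show the error.
      page.errorVisible = !/^[^@\s]+@[^@\s]+$/.test(page.email);
      return api;
    },
    assertVisible(selector) {
      if (!page.errorVisible) throw new Error(`${selector} not visible`);
      return api;
    },
  };
  return api;
}

// The high-level flow: visit, fill in bad data, expect an error to appear.
// We don't care how validation works, only that the outcome is achieved.
fakeBrowser()
  .url('/signup')
  .setValue('input[name=email]', 'not-an-email')
  .click('button[type=submit]')
  .assertVisible('.signup-error');
```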
Some of the unit testing and integration testing tools can be used for this as well.
Jasmine has a specific add-on that makes this stuff work.
And there's again, just tons of these all the time and it's hard to keep track of them all.
But these are the ones that I've been seeing a lot: Nightwatch and Selenium, and I think WebDriver does it as well. Keep an eye out for those, check them out.
See how they run, if it's gonna fit in with your workflow and things like that.
Those are the first three kind of I guess drier parts of testing if you can believe that because I'm sure some of you are just like, oh, this is all dry.
Those are the old school ones that we have. Let's get into new school.
With testing starting off on the server side, they didn't necessarily have UI stuff to worry about. And so, as testing has grown and found its place in the front end stack, we're getting some new stuff that's specific to us, because we said, hey, we have other problems than just things around logic.
How can we kind of come up with automated ways of making sure those are correct as well? The same kind of ideas for the most part apply. Some of these get kind of interesting.
Visual regression, which we'll cover first, is kind of the only one that's not objective, which brings its own suite of problems that we'll talk about.
But it's also, I think, one of the most powerful for a lot of reasons. Especially for those who work really closely with designers; this is something I think they would be on board with as well.
Sometimes you update your CSS or you think you do a really clever refactor and then you realized this one page you never navigate to is just all out of whack, right? Just entirely just kind of screwed up.
This is happening less now that we're adopting atomic design and living style guides and building things modularly, but it's still a thing that happens.
And so, the technology behind this is like really cool. Essentially what happens is that before you start development at some point, you run a series of screenshots of your webpage, specific pages that you point to.
At the end of development before you merge everything in, you run a second set and then you do an image comparison that gives you as precise as you want kind of like differences to the pixel.
It looks something like this where it's like you can see very specifically even the smallest things that are different. I don't know if you've ever had the situation where designers will be reviewing what you've built and they're like, that button looks like one pixel too far to the right.
Did that change since last time? In your head you're like, no, it didn't, but they're like, I don't know.
This gives you like a definitive yes or no to that so it's not like, it's not a question anymore. It's not a gut check.
This is an example from the BBC.
They actually rolled their own visual regression testing framework, which was kind of one of the first in the space, which is really cool.
But this is showing that it can even show you differences in font anti-aliasing.
It gets really precise.
You can make it less precise if you don't care about pixel perfect stuff but if you know you really want to know as soon as anything changes then you have that power. These are pretty simple I think overall compared to some other tests because you're navigating, taking screenshots and moving forward.
This is using something called Phantom CSS which is one of I think the more popular tools for this at this point.
It's the one that gets mentioned the most, at least. It runs on something called Casper, which is the browser-on-the-server thing. We tell it to go to our homepage, and then we take a screenshot and we name it. Much like our acceptance tests, we can fiddle around with the UI, change things, and take screenshots based on those interactions. We can click a button and make sure that we have an error state.
We can fill out some valid information and then take a screenshot of our success state. And then it will take those screenshots, compare them like tell you the differences and things like that.
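The flow just described, sketched in PhantomCSS style: casper.start/then and phantomcss.screenshot mirror the real APIs, but logging stand-ins are defined here so the shape runs outside CasperJS, and the selectors and names are hypothetical:

```javascript
// Stand-ins for the CasperJS/PhantomCSS objects, recording what they'd do.
const shots = [];
const phantomcss = {
  // Real API: capture the element matched by `selector` to a named image.
  screenshot(selector, name) { shots.push(name); },
};
const casper = {
  start(url) { return casper; },      // navigate to the page under test
  then(fn) { fn(); return casper; },  // queue the next step
  click(selector) {},                 // interact with the UI
};

casper
  .start('http://localhost:4200/')
  .then(function () {
    // Baseline shot of the component, not the whole page.
    phantomcss.screenshot('#signup-form', 'signup-form');
  })
  .then(function () {
    // Poke the UI into its error state, then capture that too.
    casper.click('#signup-submit');
    phantomcss.screenshot('#signup-form', 'signup-form-error');
  });
```

On a later run, the tool re-takes the same named shots and image-diffs them against these baselines.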
Important thing to note.
In that example, we were taking specific areas like sign up form and things like that.
You could do full page screenshots if you really wanted to.
What you want to be wary of are situations where, say, you update the header on your website; that occurs on every single page.
When you run these regression tests, that is now a failure on every page of yours. It's gonna take a long time going through those to make sure that's the only thing that changed. I don't want to say it's better, but something to think about: rather than full pages, compare components.
And I found this works really well if you have some kind of living style guide. And so, being able to just go to your style guide and say, "Hey, all our buttons should look like this." And there should be no regressions there and all our links look like this.
It's definitely not the only thing you want to do just because it's in the style guide and maybe something gets messed up in real life. But it's kind of like a good minimum of like these are the things we used to build and they're always consistent.
Responsive design is the thing people care about these days, and visual regression testing can help, especially if you don't have actual devices to look at this stuff on. We tend to really focus on the devices we have and don't check the odd sizes a lot of the time. And sure, we can resize the browser, but being able to test bizarre sizes in an automated fashion helps you know that regardless of the viewport, your stuff still looks good.
This is kind of the default configuration for Phantom CSS. You would throw this in your Grunt or Gulp build configuration.
And when talking about how specific things get, we have this mismatch tolerance, so you can say, hey, I'm gonna be a bit more lenient, or I want to be very specific.
And you tell it where screenshots go, and you give it a viewport size.
But what you can do as well is you can specify different view ports.
And so, you can say, hey, I want some really absurdly small thing, or some massive thing, or something that's wider than it is tall, or taller than it is wide.
You start getting really kind of funky dimensions in here to make sure that your responsive design is as resilient as it should be.
And I think that's really, really powerful especially these days.
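A configuration along those lines might look something like this. This is a hypothetical sketch in the shape of the grunt-phantomcss plugin's documented options; the paths, tolerance and viewport sizes are made up for illustration:

```javascript
// Hypothetical visual-regression config: one suite run at several viewports.
const visualRegressionConfig = {
  phantomcss: {
    desktop: {
      options: {
        screenshots: 'test/visual/screenshots', // where baseline images live
        results: 'test/visual/results',         // where diff images get written
        mismatchTolerance: 0.05,                // how lenient the comparison is
        viewportSize: [1280, 800],
      },
      src: ['test/visual/**/*.js'],
    },
    // The same tests again at odd sizes, to stress the responsive design.
    smallPortrait: {
      options: { viewportSize: [320, 480], mismatchTolerance: 0.05 },
      src: ['test/visual/**/*.js'],
    },
    tallAndNarrow: {
      options: { viewportSize: [400, 1600], mismatchTolerance: 0.05 },
      src: ['test/visual/**/*.js'],
    },
  },
};
```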
The thing right now, though, at least in my experience, is that getting the workflow right with this is kind of interesting.
With unit testing and acceptance testing and stuff like that, it's a script that you run, and it tells you, hey, it's an error, or no, it's not an error.
With these being a bit more subjective there is no right or wrong.
You might have intended to make a change and so you have to say like yes, this is this. We meant to do that.
Part of the trouble is that you have these screenshots, right? You might say in your master branch, hey, these are our baselines, and we keep those. And when something changes, how do you go about updating it, and what warrants that? There's a bit more of a manual effort in a lot of these. People are starting to use these things and finding these problems.
Solutions come out for that.
There's this tool called Percy that I'm totally not biased towards because their logo is a hedgehog, but we'll just go with it, and I definitely don't work for them.
I talk about them a lot because I really love this. They're kind of the first I've seen in the space to start treating this differently.
There's a full-on workflow: rather than manually moving around screenshots and stuff, it's an approval process.
And so, it's a service that you link up with your GitHub and it has this view of showing you what the differences have been.
When you first run it, it's like, hey, there are problems. In your Travis build it can say, hey, there's a problem.
You have to go in and review everything and then say, hey, this was intentional, or no, this was not, and it gives you a nice little approve button. And I think that's the cleanest workflow I've seen so far.
The other thing they're doing is that rather than taking screenshots, which can be processing-intensive, they take the actual markup and compare markup, which makes the responsive testing easier. You can test different viewports on the fly very, very easily.
We're only gonna get feedback on how to make any of these better if people use them.
And so, I think that's another reason that we should really be pulling these things into what we do because that's the only way they're gonna get better. And so, we're starting to see that.
There's definitely change happening so it's really exciting.
As I said, Phantom CSS is kind of the big one; Wraith is the one that the BBC built.
Phantom Flow gives you this really cool visualization of all your branch logic, if that's your thing. If you want to visually show how complex your application is, check out Phantom Flow, because it has this really interesting layout. Accessibility testing.
Accessibility is big.
It's a big concern. The web is accessible out of the box.
It gives us everything that we need to make accessible websites, and we like to break that. And so, we can integrate automated accessibility testing, because it's not something everyone is thinking about upfront. But you can do this, and then you can learn if you don't know.
That's the case at Society of Grownups.
We have an accessibility testing step, so nothing gets put in if the markup is not accessible. This also doesn't mean that everything is accessible. It's not a blanket, you're 100% right; you should still be trying to use your website with screen readers and seeing how that experience is.
Maybe try not using your mouse for a day and see what happens but this is a great first step.
This is one that you don't have to automate. There are tools you can run on the command line where you give them a URL and they tell you specifically, hey, this element is wrong, and here's why. And then you can go and find out how to fix it. It's pretty great.
It also isn't just strictly code.
These tools will also take into account things like color contrast.
And so, if you're trying to teach your design team anything about accessibility, this might be a great thing to show them as well, to sit down with them and be like, "Hey, let's check this out and let's read about this together." We're really collaborative at Society of Grownups, so I love doing things like that.
If I can educate design on how code works and how what they're doing affects us, we've found that it helps our workflow a lot.
You can try that.
This one there's not as many tools.
There are a lot of framework-specific ones. We use ember-a11y-testing.
A11y and Pa11y are out there for your non-framework-specific situations. And hopefully we start seeing more of these, and maybe you don't need more tools, but things that are a bit more flexible for different situations and stuff like that. Performance testing.
We'll hear some stuff about performance later today; we heard some stuff about it yesterday.
And if you don't know about establishing performance budgets, or that's not really a thing at your organization, essentially you set rules and numbers about things like how many bytes of images we want to have in our application. Tim has really great information on his website about establishing performance budgets.
The thing about them is that they're kind of hard to stick to.
I've talked with a lot of people who were like, yeah, we started off with this budget, but as we built we kind of fudged the numbers and just ignored it.
And so, if you're in that case there are ways that you can kind of try to keep yourself honest about those numbers that you initially set.
And so, there are a lot of different options that you have to test against.
You can do the raw stuff: how many seconds it takes to complete, how many requests there were, how many bytes altogether.
But you can also take into account Speed Index, which gets more at perceived performance versus the raw numbers. You don't have to write any actual tests for this; you just set it up in your build tool configuration, and it will say, hey, you have way too many images here, or you blew your budget.
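A budget set up in a build tool might look something like this. This is a hypothetical sketch in the shape of the grunt-perfbudget plugin (which runs your URL through WebPageTest and fails the build when a number is blown); the URL, key and budget numbers here are made up:

```javascript
// Hypothetical performance-budget configuration sketch.
const perfBudgetConfig = {
  perfbudget: {
    default: {
      options: {
        url: 'https://example.com',          // page to test
        key: 'YOUR_WEBPAGETEST_API_KEY',     // WebPageTest API key
        budget: {
          SpeedIndex: '1500', // perceived-performance metric
          requests: '40',     // total request count
          bytesIn: '500000',  // total page weight in bytes
          loadTime: '3000',   // raw load time in milliseconds
        },
      },
    },
  },
};
```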
That might be something to look into if you're in that situation.
Again, not like a ton here right now.
I think performance budgets are still a thing that a lot of people are adopting.
But as we're adopting them we can, again, play around with these things and see what we want from them that we don't have yet.
I literally was in bed yesterday morning and read about this next thing, and figured it into the presentation because I thought it was really cool and important, which also just shows that this is still an evolving ecosystem, right? There's a thing called monkey testing, named after the infinite monkey theorem, which is the one that says if you put a bunch of monkeys in a room with a bunch of typewriters, at some point they will create the entire works of Shakespeare.
Monkey testing is all about — I don't know if you've had clients like this, where you have a review, you're giving them advice, you load up the application like, "Here it is," and they take it and just start slamming on it, and they're like, "What's going on? I don't know how to use this." This is kind of that, in automated fashion.
This is a GIF of a framework called gremlins.js — these red circles are actual gremlins interacting with the webpage.
They randomly just start clicking around and filling stuff out to see what they can break, which I think is really great, because if your stuff doesn't break under really intense situations like this, you're probably good in normal situations, at least most of the time.
And so, this is kind of that thinking of the kind of weird situations that end up happening. Again, this isn't really...
(laughs) I love this, this is so good.
Yeah, you make a horde of gremlins and you unleash them on your application.
And again, this is not writing the test, this is just setting it up and letting it run and seeing what happens.
This is that at its most basic form.
There are different things that you can sort of include or not include on your situation.
If you don't have any forms maybe you don't want to try filling out forms and stuff like that.
And as these are running, they kind of log these things to the console so you can see exactly what they're doing.
And they also track the frames per second, so you can just watch that fall really hard as the gremlins are unleashed.
Yeah, as far as I know, this is the only one that does it and I only read about it yesterday.
If that's your thing, check it out.
I'm really pumped about that, and I want to see more like it. The nice thing about this too is that with a lot of tests, you're the one writing the test, and it's these very specific scenarios that you've planned out.
This is just — you don't know what's gonna happen. This is seat-of-your-pants, wild west kind of testing, just letting the random number generator decide what it wants to do that day.
We didn't talk about linting here.
Linting's great — I was talking about it yesterday.
It's amazing, but personally — and this is a place where I'm maybe not entirely in line with the rest of the front end testing community — I don't see linting as testing.
With testing as I kind of mentioned earlier, it's not necessarily about what you're inputting, it's about making sure that the outcome is always consistent.
Linting is kind of the exact opposite: it's very much about what are you doing, how are you doing it, how are you writing things? It's super useful, and I wanted to touch on it, but I think it's kind of its own category.
There's linting to check if there are styles that just aren't used in your application.
There's linting for framework specific stuff. We have linting for our Ember templates to make sure that we're writing them in a certain way. They're really awesome.
We have found that it removes this level of overhead from code reviews, where we don't have to be checking for single quote versus double quote, or did you remember your semicolons, or there's an extra space here.
We can really focus on the core concepts and architecture of that code instead of just like these nitty-gritty things.
There's a lot there.
You want to use all of them, right? Absolutely, you should do every single one. No.
You might not need unit testing and integration testing; that just might not be a thing that's worth your time. And so, just like with everything in development, you have to really look at the problems you're trying to solve and ask, what's the real solution here, instead of just throwing everything at it and seeing what happens.
Figuring out, where are the things that usually break? Is it on the visual side? Is it with actual logical things? Where do we spend most of our time fixing bugs? What areas do we see regress the most? Does our code just look like it was written by 25,000 different people, and you're like, what is this? It really depends.
You should be asking questions like these before really doing any of these.
Don't just pick a thing and say, "Well, everyone's talking about it, so we should do it." Maybe it doesn't make sense for you.
That's totally a thing and that's totally okay. I don't know, there's just (mumbles) conversation about this slide. There are definitely situations where you can apply tests too liberally.
I've been in situations where the tests themselves were kind of just added for testing's sake. At one point, whenever you navigated to a page, we were checking that the header said "Locations" — checking for the text of it, and not just that you got to the page and it didn't explode on you.
Which, when you think about it, is fine as a one-off, but when you start having tons of tests that are testing for things that maybe don't add the most value, it's time, right? It's time that you spend writing these, and if they break, you have to go in and ask, was this a valid test in the first place?
I've also been in situations where we had a relatively small application and the client's test suite took 45 minutes. And so, a lot of people just wouldn't run it. And again, sometimes you just have stuff that is really large.
That's gonna need a test suite that takes that long but it's not always the case.
And so, that's another thing you kind of have to keep an eye out for.
It's saying, we're testing a three-page website, why is this taking 15 minutes? You can write tests about anything; you can test things that aren't even your code. You could be testing to make sure that jQuery is doing its thing, right? If you have something like, we want to make sure this element got added, and you're adding it through some jQuery thing — you could test around that, but is that really valuable, I guess?
And so, I've found it's really easy to add low-value tests, and you're gonna do it, it's gonna happen — but there is such a thing as too much. Some people would disagree, and that's totally fine. But it's just a thing to keep in the back of your heads as you're going about this. And again, these are pieces.
Maybe there's some logic, but it can be covered by an acceptance test, and then we keep all the front end-specific tests.
Or maybe you have just like the most simple website that you're like it's three pages, we don't really have a lot of interaction.
We don't need acceptance testing, we're just gonna do these categories.
Or maybe you just start with one and you say, we're gonna start here first because we have a really small website, and we think this is gonna give us the most bang for our buck right out the gate. This is how ours is looking right now at Society of Grownups though I might add the monkey testing in there at some point.
And we're gonna evolve that, right? We've talked about doing performance budgets and stuff like that, so as we're prepared to start adding more of these, we can do that, and it's okay.
The other thing about visual regression testing too is that it's probably not great to add it very early in the development process.
If you have something very new, you're building things out as you go, right? And so you might end up changing the way things look. Just in my experience, it's better to add it toward the end, when things are pretty stable, to set up what that baseline actually is. Writing tests — anyone can do it, but writing good tests is a thing that I think a lot of us still have to learn, myself included. I'm not going to call out which project this was, but this was a thing I saw recently, and we can talk about why it's failing. Maybe it's not the tests, but: 97% coverage, and everything's red. Some people strive for this 100% coverage ideal. I've worked in an environment like that, and it is not ideal, because that's when you start getting into: are you testing things that are actually of value to you? It's easy to write tests that don't test the right thing, or maybe you wrote a test that never breaks, and you look back on it like, why doesn't this break? If a test never breaks, there's probably something wrong with your test. (laughs) It should always fail at some point.
You should make sure that it can actually hit a failure state.
This is pretty important to know.
The following are a bunch of excuses that I hear a lot.
They're excuses that I made when I was first getting into this.
I understand the reluctance.
"My code works, right?" Yes, it does, and no one's covering that up. The point, though, is where you balance your work, right? You can not write tests and have a team dedicated just to dealing with bugs that come up.
Or you can have tests kind of try to decrease that bug load and have your team build out new features.
I think people would be more enthusiastic about doing that than fixing bugs over and over again. You're still gonna have bugs.
Testing is not saying that you'll never have bugs. We are humans, we make mistakes and that's just gonna happen.
But we can make sure that we're not causing the same bugs over and over again.
"You already have a code base; adding tests after the fact is really hard." Sometimes this is true, especially with things like accessibility testing.
If you have a thing and you just try to do it all at once or with linting, you're gonna have a really bad time.
But with unit testing, acceptance testing, you can add that as you're fixing things, you know? You can add them one at a time.
You don't have to do everything at once.
It's okay to have just like five tests at first and as you start doing it that way, over time you will have a pretty robust test suite. Just start small, start somewhere.
If you're still building new features on an old code base, that's a perfect candidate for writing tests — play around with it and see where you can add them in other places.
I've also found that it's really hard to kind of get everyone on board with testing.
We're very opinionated as developers, I have found, and so sometimes you have people that are just like, no, I'm not writing tests. I'm not gonna write tests.
And I've found that that's 'cause they don't really understand this full gamut that we have. They might be thinking, I don't want to write tests 'cause you're making me write unit tests, when maybe you're thinking about something else. And so I've found it's really important to have these conversations, right? This is not something, if you're the lead, to put your foot down about and say, "We're gonna do tests, and we're gonna do it, and you're gonna like it." That never goes well, and if people don't like it, they're not gonna do it. So really find ways to educate your team, make it part of the process. Part of our code review process at Society of Grownups is that we review the tests as well.
Everything gets code reviewed there, and we make sure that we comment on the tests too.
We say, "Hey, it looks like you're missing this really important branch of logic." Maybe you should write a test around it, because it's not there. Or looking at a test and saying, hey, maybe this isn't the best way to test it — what if you did it this other way?
That's really a great way to build accountability, too. We call out people when they don't have tests for things that are really big, and I've been guilty of that, to be quite honest. It's all about education and talking with your team and figuring out what's gonna work for you. It might be a tough thing to do at first, but I think it's one of those things where, if you're persistent and you guide people through it the right way, they'll be very much on board with it. If you're really jazzed about testing now: Frontend Architecture for Design Systems is a great book in which Micah Godbolt describes four pillars of front end architecture, and testing is one of them.
And so, if you want to get other insight that's a great resource.
There's also this website that just came out, frontendtesting.com.
They have a Slack channel, so if you're either really excited about this stuff, or if you just have questions and are really stuck on how to start testing for yourself, a bunch of us are there and we're absolutely willing to answer any questions that you have. Feel free to join in.
There's not a lot of people in it right now and I'm really hoping we can get more folks, and there's channels for everything we talked about here today.
Yeah, testing it's not super glamorous.
People don't like to think about it but people also don't like to think about the plumbing in their house.
They want to think about how you decorate rooms, how you paint the outside, taking care of your lawn.
But if you don't think about the plumbing when you're building a house, you're gonna have to clean a lot of poop by hand later on and no one really likes to do that, and testing's pretty much the same way.
And with that, thank you.
(clapping) - Thank you Alicia.
- You're welcome John.
- Great way to start the day.
Now we have time for a couple questions.
Some great tweeting out going on there.
Any questions for Alicia about what you're doing, what you're not doing? - [Alicia] Your riveting testing questions. - [John] What we should be doing. Was that — Ben's got one, all right.
- Just wondering — you've sort of touched on this, but I wonder if you might go slightly further — about the line between, if you've got a QA team, the tests that you're writing and the tests that they're writing.
Do you think there's a good guideline for a balance between those, or do you think that really, finally, they should just jump in, learn all the test suites, and learn how to write Selenium? Which approach do you think is better? - That's a good question.
I think that for...
Both can live in harmony.
We're sort of planning on having acceptance testing on both ends of it.
I think folks in QA might also, they can kind of like approach things differently, right? They have a different mindset of how they want to test things.
Maybe for development, the acceptance tests are pretty high level.
Maybe the QA wants to get a little more in depth and start testing for edge cases.
But it is a conversation to have because you don't want to have the exact same thing or maybe you do, I don't know. (laughs) I think that depends.
But at least knowing about it — you should know what QA puts into their automated testing, and QA should know what you put into your automated testing, and that communication should always exist. And there's probably gonna be some overlap, yeah, but I can see both existing without too much trouble. - All right.
Any more questions — one, one or two more? No one wants to talk about testing.
We have to swap (mumbles) - [Alicia] I understand. (laughs) - [John] All right.
Well, you know where you can find Alicia.
She'll be here all day and she's close-- - We have one.
- Sorry, oh we do have a question.
Rob. All right.
- Oh god, you put me on the spot now.
(laughs) - Hi.
I've done a bunch of things where I try to drive the browser to make it click on things.
They all seem to be really flaky in really different ways on different browsers. What's the magic combination? Sorry, I'm putting you on the spot now.
- (laughs) I don't know if there's a magical combination. Again, sometimes it is flaky, but I'm gonna chalk that up to maybe not enough people doing it and complaining that it's flaky. One of the big problems is that a lot of the headless browsers are very limited, and they're not always the most up-to-date for the browsers they're supposed to be mimicking. And I think that's because, for the few of us doing this, it's kind of like, okay, that's what we've got.
But I think if enough people were like, hey, we want headless IE, could we get that? — yeah, there just aren't enough people saying, "Hey, this sucks and it could be better." Because there are things that could be better. A lot of this is kind of rocky sometimes.
Sorry, I don't have a magic (mumbles).
(laughs) - Mel, I think Andrew's got, right behind you has got a question.
When you're talking about integration testing, how are you communicating to your server that it's currently in a test environment? Because, you know, you've got the fixture problems — I've always done integration tests and found that we've ended up leaning on the tools that the server talks to.
What's your team getting the best results with? - Yeah, so for us right now, it's not too much of a problem because our front end and our API are actually entirely separate.
And so, we only use mocks.
But there are mocking libraries that will mock endpoints for you, so you're not actually hitting your server. Yeah, especially if you're not able to do that, those are good options. Mocking libraries are out there if you don't want to constantly hit your server. Especially with the monkey testing — you probably don't want to use your actual API with that. - (mumbles) sign up from security.
We have security testers who can do some things like that. - Yeah.
- At our live site and that's the one (mumbles). - Yeah. (laughs) Yeah, there's a place for it but maybe not all the time but yeah, there are mocking libraries available for most of these libraries.
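For illustration, a mock can be as simple as a hand-rolled fake that records calls and returns canned responses, so tests never touch the real API. Everything here — `createMock`, the URL shape — is made up for the sketch; real mocking libraries like the ones mentioned above do far more, such as intercepting actual network requests:

```javascript
// Hand-rolled endpoint mock: returns canned responses keyed by URL and
// records every call so a test can assert what was requested.
function createMock(responses) {
  const calls = [];
  function mockFetch(url) {
    calls.push(url);
    if (!(url in responses)) {
      throw new Error(`No mocked response for ${url}`);
    }
    return responses[url];
  }
  mockFetch.calls = calls; // expose the call log for assertions
  return mockFetch;
}

// A test would hand this fake to the code under test in place of the
// real API client, then assert on both the result and the call log.
const fetchUser = createMock({
  '/api/users/42': { id: 42, name: 'Test User' },
});
```

Throwing on an unmocked URL is a deliberate choice: it surfaces any request the test didn't plan for, instead of letting it silently hit production.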
- And it occurred to me — you could get young children. You just get them to use your website.
- There you go, yeah.
Well, it's like the drunk user testing, right? - [John] Might be rolled on an iPad will break.
You know that, right? - [Alicia] Yeah. - [John] I can kind of...
Yeah, she can be consultant, I get her out there. - Bring me and I'm gonna break it.
- She'll come in now, she'll break your website. You can kind of (mumbles) — watching little kids use computers when they're really little, that's a very, very interesting experience.