Unit testing #671
Comments
@Bloke I can start working on Unit/Functional/Assertion tests if you want. I use Codeception for my test runner.
I do this all the time in Codeception via the webdriver package.
Hey, I like the look of Codeception. I'm going to investigate that, thanks for the tip.
Bumping this. With the server move, I think we have capacity to self-host Codeception. I'll have a look at the technicals and see how viable it might be.
Another possible contender:
@petecooper If I were to throw a zip archive at you containing a DB backup .sql file and a bunch of files/images, with a complete Txp content environment built into it, would it be possible to use that as a "starting database" on which to build a testing environment on the demo server in the upcoming tiered branch structure? Basically, I'd like this to happen on one particular DB environment during repave:
Thus, we have a well-known environment rebuilt every few hours. In this environment would be a section called /test. The page template there would call in a bunch of forms that would render an entire page chock full of tag output. The forms would be grouped by tag or type, to make it simpler to find ones to update whenever we need them (see later). Thus if anyone visits /test they get an entire page full of output, for which we know what it should look like at an HTML level. Bonus points if we can visit /test/some-group to only run the tests in that particular form.

In a parallel page, say /results, are a bunch of forms that contain the respective expected results of the tests, as HTML. Again, /results/some-group would only render the expected results of the given group. When you visit /test/... it runs the given test(s) and, at the end of each group, runs diff() on the actual output compared with the expected output fetched from its corresponding page in the /results section. It throws out a report at the end of each section highlighting the differences. Bonus marks if we collate these into a kind of summary report displayed at the top of the page after the run has completed, stating how many tests were run, how many failed, and which sections they were in, with hyperlinks to the anchors further down the page allowing us to inspect the contents.

The plan is to construct this 'expected results' set of Forms from one or more previous versions of Txp. When anyone makes a change to dev, or 4.x.y, or wherever we target, this DB gets rebuilt with the known good data on a brand new bleeding-edge environment. From time to time after we make any changes that affect tags, one of us visits this demo, runs /test and checks we haven't introduced regressions. From time to time, we'll update the tests or the expected output Forms, construct a new DB, perhaps with different files/images, and throw this at the server into a "build test environment from here" area.

So next time the repave occurs, we get the new tests built in. A further thought: could this test environment actually be a repo off the main /textpattern GitHub repository? Just a set of /files and /images and themes/test/pages and themes/test/forms that we can commit to, along with /setup/data/* and setup/articles/* and so forth as XML files. People could then add/tweak tests via PR, or we could commit new tests and expected results as we go. Maybe instead of us supplying a bundled .zip, the test demo build process fetches the archive from this repo and stuffs it in its local environment prior to running auto-setup? That means the DB is built for us, pre-paved with content during setup.

The only things the setup process can't install natively at the moment are images and files. Longer term, we could figure that out and offer XML import of such content. For now it would have to be a bootstrapped process that runs a post-install script (which Txp supports) to populate the two tables and move the files into place.

Sorry for the brain dump; I'm just spinning ideas around in my head and wondering what we can do with the automated setup process here to build us an environment we can use for regression testing of our tag suite. Not entirely sure how we'd test everything, and not sure how much of this is doable, but it seems with a little script glue and some belt and braces we could build ourselves a rolling front-end test environment that pretty much looks after itself and allows us to periodically diff things to check the integrity of the tag parser and tag suite.

Edit: this would probably only need a front-side login, not public admin logins. Maybe behind the scenes we have a couple of private admin logins for testing. Thoughts on all of this? Improvements?
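To make the diff step above concrete, here is a minimal sketch in Python (group names and HTML snippets are invented; the real runner would fetch /test/some-group and /results/some-group and compare those):

```python
import difflib

def diff_group(actual_html: str, expected_html: str) -> list[str]:
    """Return unified-diff lines between actual and expected tag output."""
    return list(difflib.unified_diff(
        expected_html.splitlines(),
        actual_html.splitlines(),
        fromfile="expected", tofile="actual", lineterm="",
    ))

def summarise(results: dict[str, list[str]]) -> str:
    """Collate per-group diffs into the summary report described above."""
    failed = [group for group, diff in results.items() if diff]
    return (f"{len(results)} group(s) run, {len(failed)} failed"
            + (": " + ", ".join(failed) if failed else ""))

# Hypothetical output for two tag groups.
results = {
    "article-tags": diff_group("<p>Hello</p>", "<p>Hello</p>"),   # pass
    "image-tags": diff_group("<img alt=''>", "<img alt='x'>"),    # fail
}
print(summarise(results))  # -> 2 group(s) run, 1 failed: image-tags
```

The per-group diff lines would feed the anchored report sections further down the page.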
High level - doable, yeah. I'd be inclined to patch a database rather than splat it with a replacement (perhaps this is what you mean; excuse any misunderstanding). Ideally, I'd put it into the preflight build script and fetch whatever file(s) are needed prior to running the install. There is already some patching done to squirt thousands of users into each install, so presumably it's easy to do the same with content…which then begs the question of whether we maintain a repo for this test suite stuff and merge it in at install time, so it happens as part of the initial setup, rather than as a db bodge after the fact.
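The patch-rather-than-splat idea might look roughly like this; a sketch using SQLite purely for illustration (the table and content here are hypothetical, and Textpattern itself runs on MySQL, so the real preflight script would differ):

```python
import sqlite3

# Hypothetical content patch: seed a textpattern-like table with one
# known test article. The real patch would carry the full test content.
CONTENT_PATCH = """
CREATE TABLE IF NOT EXISTS textpattern (
    ID INTEGER PRIMARY KEY,
    Title TEXT NOT NULL,
    Section TEXT NOT NULL
);
INSERT INTO textpattern (Title, Section) VALUES ('Tag test article', 'test');
"""

def apply_content_patch(conn: sqlite3.Connection, patch_sql: str) -> None:
    """Apply a SQL content patch to a freshly installed database,
    as a preflight step before the environment goes live."""
    conn.executescript(patch_sql)
    conn.commit()

conn = sqlite3.connect(":memory:")
apply_content_patch(conn, CONTENT_PATCH)
```

Patching this way keeps the rest of the freshly built install intact, rather than replacing the whole DB.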
Yeah, that's what I meant in the latter half of my missive. Merge some repo containing content prior to build, then maybe a post-install script to mop up the stuff we can't inject that way.
Sorted. Let's do that.
Repo added: https://github.com/textpattern/unit-test
Will start to populate it with content over time; then, once it has a few tests in place, we can try it out.
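For the build step that fetches the archive from this repo and stuffs it into the local environment prior to auto-setup, a sketch of the unpack side (the download from github.com/textpattern/unit-test is assumed to have happened already; the paths are illustrative):

```python
import io
import zipfile
from pathlib import Path

def unpack_test_content(archive_bytes: bytes, dest: Path) -> list[str]:
    """Unpack a unit-test content archive into the install tree,
    skipping any entry that would escape the destination directory."""
    extracted = []
    with zipfile.ZipFile(io.BytesIO(archive_bytes)) as zf:
        for name in zf.namelist():
            # Guard against path traversal in a fetched archive.
            target = (dest / name).resolve()
            if not str(target).startswith(str(dest.resolve())):
                continue
            zf.extract(name, dest)
            extracted.append(name)
    return extracted
```

Run before auto-setup, this would drop themes/test/pages, themes/test/forms, /files and /images into place so the installer finds them.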
It's high time we introduced a test suite alongside the development process to help catch regressions. As we move towards a more class-based ecosystem with less reliance on global variables, this should become achievable with something like PHPUnit.
This will be an ongoing issue, so no milestone required. It can be closed when a framework is in place and the tests start being added. At minimum we could do with a couple of pages that stress-test the admin and public sides and report yay/nay to each test with colour-coded results so we can see at a glance if anything's broken due to our meddling.
The AJAX stuff might prove challenging to test. Does anyone have any experience with unit testing client-side requests to the server?
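One pragmatic answer for the AJAX side: unit-test the server half of each XHR endpoint over plain HTTP, asserting on the JSON/HTML fragments it returns, and leave true browser-driven testing to Codeception's webdriver module mentioned above. A self-contained sketch with a stand-in endpoint (everything here is hypothetical, not Textpattern's actual admin interface):

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

class FakeTxpAjax(BaseHTTPRequestHandler):
    """Stand-in for an admin-side AJAX endpoint (hypothetical)."""
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length).decode()
        payload = json.dumps({"ok": True, "echo": body}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(payload)))
        self.end_headers()
        self.wfile.write(payload)

    def log_message(self, *args):  # keep test output quiet
        pass

def post(url: str, data: bytes) -> dict:
    """POST form data and decode the JSON response, like an XHR would."""
    with urllib.request.urlopen(urllib.request.Request(url, data=data)) as r:
        return json.loads(r.read())

# Spin up the stand-in server on an ephemeral port and exercise it.
server = ThreadingHTTPServer(("127.0.0.1", 0), FakeTxpAjax)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{server.server_address[1]}/"
response = post(url, b"event=article&step=save")
server.shutdown()
```

The same pattern works from PHPUnit or Codeception: drive the endpoint over HTTP and assert on the response, without needing a browser in the loop.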