Monday, December 22, 2014

Run mozharness talos as a developer (Community contribution)

Thanks to our contributor Simarpreet Singh from Waterloo, you can now run a talos job through mozharness on your local machine (bug 1078619).

All you have to add is the following:
--cfg developer_config.py 
--installer-url http://ftp.mozilla.org/pub/mozilla.org/firefox/nightly/latest-trunk/firefox-37.0a1.en-US.linux-x86_64.tar.bz2

To read more about running Mozharness locally go here.


Creative Commons License
This work by Zambrano Gasparnian, Armen is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.

Tuesday, December 09, 2014

Running Mozharness in developer mode will only prompt once for credentials

Thanks to Mozilla's contributor kartikgupta0909 we now only have to enter LDAP credentials once when running the developer mode of Mozharness.

He accomplished it in bug 1076172.

Thank you Kartik!



Monday, December 08, 2014

Test mozharness changes on Try

You can now push to your own mozharness repository (even a specific branch) and have it be tested on Try.

A few weeks ago we developed mozharness pinning (aka mozharness.json) and recently we enabled it for Try. Read the blog post to learn how to make use of it.

NOTE: This currently only works for desktop, mobile and b2g test jobs. More to come.
NOTE: We only support named branches, tags or specific revisions. Do not use bookmarks, as they don't work.



Monday, November 24, 2014

Pinning mozharness from in-tree (aka mozharness.json)

Since mozharness came around 2-3 years ago, we have had the same recurring issue: we test a mozharness change against the trunk trees, land it, and get it backed out because we regressed one of the older release branches.

This is due to the nature of the mozharness setup, where once a change lands, all jobs start running the same code, no matter which branch the job is running on.

I have recently landed some code that is now active on Ash (and soon on Try) that will read a manifest file pointing your jobs to the right mozharness repository and revision. We call this process "pinning mozharness". In other words, we fix an external factor of our job execution.

This will allow you to point your Try pushes to your own mozharness repository.

In order to pin your jobs to a repository/revision of mozharness, you have to change a file called mozharness.json, which indicates the following two values:
  • "repo": "https://hg.mozilla.org/build/mozharness",
  • "revision": "production"
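As a sketch of what pinning buys you, a consumer of this file only needs those two values to check out the exact code; the clone command below is my illustration, not the actual buildbot implementation:

```python
import json

# Hypothetical consumer: parse mozharness.json and build the command that
# would check out the pinned repository at the pinned revision.
manifest = json.loads("""
{
  "repo": "https://hg.mozilla.org/build/mozharness",
  "revision": "production"
}
""")

# hg clone -u <rev> <repo> clones and updates to the given revision.
checkout_cmd = ["hg", "clone", "-u", manifest["revision"], manifest["repo"]]
print(" ".join(checkout_cmd))
```

Because every job re-reads the manifest, changing these two values in-tree is enough to point a whole branch at a different mozharness.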


This is similar in concept to talos.json, which locks every job to a specific revision of talos. The original version of it landed in 2011.

Even though we have had a similar concept since 2011, that doesn't mean it was as easy to make it happen for mozharness. Let me explain why:

  • For talos, mozharness has been checking out the right revision of talos.
  • In the case of mozharness, we can't make mozharness check itself out.
    • Well, we could but it would be a bigger mess
    • Instead we have made buildbot ScriptFactory be a bit more flexible
Coming up:
  • Enable on Try
  • Free up Ash and Cypress
    • They have been used to test custom mozharness patches and the default branch of Mozharness (pre-production)
Long term:
  • Enable the feature on all remaining Gecko trees
    • We would like to see this run at scale for a bit before rolling it out
    • This will allow mozharness changes to ride the trains
If you are curious, the patches are in bug 791924.

Thanks to Rail for all his patch reviews and to Jordan for sparking me to tackle this.




Thursday, November 06, 2014

Setting buildbot up a-la-releng (Create your own local masters and slaves)

buildbot is what Mozilla's Release Engineering uses to run the infrastructure behind tbpl.mozilla.org.
buildbot assigns jobs to machines (aka slaves) through hosts called buildbot masters.

All the different repositories and packages needed to set up buildbot are installed through Puppet, and I'm not aware of a way of setting up my local machine through Puppet (I doubt I would want to do that!).
I managed to set this up a while ago by hand [1][2] (it was even more complicated in the past!), however, these one-off attempts were not easy to keep up-to-date and isolated.

I recently landed a few scripts that make it trivial to set up as many buildbot environments as you want, all isolated from each other.

All the scripts have been landed under the "community" directory under the "braindump" repository:
https://hg.mozilla.org/build/braindump/file/default/community

There are two main scripts. If you call create_community_slaves_and_masters.sh with -w /path/to/your/own/workdir, everything will be set up for you. From there on, all you have to do is this:
  • cd /path/to/your/own/workdir
  • source venv/bin/activate
  • buildbot start masters/test_master (for example)
  • buildslave start slaves/test_slave
Each master/slave pair has been set up to talk to each other.

I hope this is helpful for people out there. It's been great for me when I contribute patches for buildbot (bug 791924).

As always in Mozilla, contributions are always welcome!

PS 1 = Only tested on Ubuntu. If you want to port this to other platforms, please let me know and I can give you a hand.

PS 2 = I know that there is a repository with docker images called "tupperware"; however, I had been working on this set of scripts for a while. Perhaps someone wants to figure out how to set up a similar process through the docker images.




Thursday, September 25, 2014

Making mozharness easier to hack on and try support

Yesterday, we presented a series of proposed changes to Mozharness at the bi-weekly meeting.

We're mainly focused on making it easier for developers and allowing for further flexibility.
We will initially focus on the testing side of the automation and lay the groundwork for other improvements further down the line.

The set of changes discussed for this quarter are:

  1. Move remaining set of configs to the tree - bug 1067535
    • This makes it easier to test harness changes on try
  2. Read more information from the in-tree configs - bug 1070041
    • This increases the number of harness parameters we can control from the tree
  3. Use structured output parsing instead of regular where it applies - bug 1068153
    • This is part of a larger goal where we make test reporting more reliable, easy to consume and less burdening on infrastructure
    • It establishes a uniform criterion for setting a job status based on a test result, relying on structured log data (json) rather than regex-based output parsing
    • "How does a test turn a job red or orange?"
    • We will then have a simple answer that is the same for all test harnesses
  4. Mozharness try support - bug 791924
    • This will allow us to lock which repo and revision of mozharness is checked out
    • This isolates mozharness changes to a single commit in the tree
    • This gives us try support for user repos (freedom to experiment with mozharness on try)
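To make item 3 concrete, here is a rough sketch of deriving a job status from structured log lines. The field names (action, status, expected, level) are my assumptions, loosely modeled on structured-logging conventions, not the actual schema:

```python
import json

def job_status(log_lines):
    """Return 'red', 'orange' or 'green' from structured (JSON) log lines."""
    status = "green"
    for raw in log_lines:
        record = json.loads(raw)
        if record.get("level") == "CRITICAL":
            return "red"  # a harness-level failure turns the job red
        if record.get("action") == "test_status" and \
           record.get("status") != record.get("expected"):
            status = "orange"  # an unexpected test result turns it orange
    return status

print(job_status(['{"action": "test_status", "status": "FAIL", "expected": "PASS"}']))
```

The point is that the status decision reads well-defined fields instead of pattern-matching free-form output, so every harness can share the same rule.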


Even though we feel the pain of #4, we decided that #1 and #2 give developers immediate value, while for #4 we at least know our (painful) workarounds.
I don't know if we'll complete #4 this quarter; however, we are committed to the first three.

If you want to contribute to the longer term vision on that proposal please let me know.


In the following weeks we will have more updates with regards to implementation details.


Stay tuned!




Tuesday, September 16, 2014

Which builders get added to buildbot?

To add/remove jobs on tbpl.mozilla.org, we have to modify buildbot-configs.

You can learn how to make changes by looking at previous patches; however, there's a bit of an art to getting it right.

I just landed a script that sets up buildbot for you inside a virtualenv; you can feed it a buildbot-configs patch and it determines which builders get added/removed.

You can run this by checking out braindump and running something like this:
buildbot-related/list_builder_differences.sh -j path_to_patch.diff

NOTE: This script does not check that the job has all the right parameters once live (e.g. you forgot to specify the mozharness config for it).

Happy hacking!



Thursday, September 11, 2014

Run tbpl jobs locally with Http authentication (developer_config.py) - take 2

Back in July, we deployed the first version of Http authentication for mozharness, however, under some circumstances, the initial version could fail and affect production jobs.

This time around we have:

  • Removed the need for _dev.py config files
    • Each production config had an associated _dev.py config file
  • Prevented it from running in production environment
    • The only way to enable the developer mode is by appending --cfg developer_config.py
If you read How to run Mozharness as a developer you should see the new changes.

As quick reminder, it only takes 3 steps:

  1. Find the command from the log. Copy/paste it.
  2. Append --cfg developer_config.py
  3. Append --installer-url/--test-url with the right values
To see a real example, visit this.
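In other words, the three steps are just string surgery on the logged command. A sketch, where the base command and the URLs are placeholders rather than real values (copy yours from the job log instead):

```python
# Step 1: the command copied from the log (illustrative placeholder).
base_cmd = (
    "python scripts/desktop_unittest.py "
    "--cfg unittests/linux_unittest.py"
)
# Steps 2 and 3: append the developer config and the artifact URLs.
dev_cmd = (
    base_cmd
    + " --cfg developer_config.py"
    + " --installer-url http://.../firefox.tar.bz2"
    + " --test-url http://.../firefox.tests.zip"
)
print(dev_cmd)
```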



Thursday, August 07, 2014

mozdevice now mozlogs thanks to mozprocess!



jgraham: armenzg_brb: This new logging in mozdevice is awesome!
armenzg: jgraham, really? why you say that?
jgraham: armenzg: I can see what's going on!

We recently changed the way that mozdevice works. mozdevice is a python package used to talk to Firefox OS or Android devices either through ADB or SUT.

Several developers of the Auto Tools team have been working on the Firefox OS certification suite for partners to determine if they meet the expectations of the certification process for Firefox OS.
When partners have any issues with the cert suite, they can send us a zip file with their results so we can help them out. However, until recently, the output of mozdevice would go to standard out rather than being logged into the zip file they would send us.

In order to use the log manager inside the certification suite, I needed every adb command and its output to be logged rather than printed to stdout. Fortunately, mozprocess has a parameter I can specify that allows me to manipulate any output generated by the process. To benefit from mozprocess and its logging, I needed to replace every subprocess.Popen() call with ProcessHandler().
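The pattern boils down to routing every output line through a callback instead of letting it hit stdout. Here is a stdlib-only sketch of that idea (mozprocess's ProcessHandler exposes it through its output callbacks; the function below is illustrative, not mozdevice's actual code):

```python
import subprocess
import sys

captured = []

def run_and_log(cmd, on_line):
    # Run a command and hand every output line to a callback, the way
    # ProcessHandler's output-line callbacks do, instead of letting the
    # output go straight to stdout.
    proc = subprocess.Popen(
        cmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, text=True
    )
    for line in proc.stdout:
        on_line(line.rstrip("\n"))
    return proc.wait()

# The callback could just as well be logger.debug; here we collect lines.
rc = run_and_log([sys.executable, "-c", "print('hello from adb')"],
                 captured.append)
```

With the callback pointed at a logger, everything the device commands print ends up in the log file that partners send back, instead of being lost on stdout.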

If you want to "see what's going on" in mozdevice, all you have to do is to request the debug level of logging like this:
DeviceManagerADB(logLevel=mozlog.DEBUG)
As part of this change, we also switched to structured logging instead of the basic mozlog logging.

Switching to mozprocess also helped us discover an issue in mozprocess specific to Windows.

You can see the patch in here.
You can see other changes inside of the mozdevice 0.38 release in here.

At least with mozdevice you can know what is going on!



Tuesday, July 15, 2014

Developing with GitHub and remote branches

I have recently started contributing using Git by using GitHub for the Firefox OS certification suite.

It has been interesting switching from Mercurial to Git. I honestly believed it would be more straightforward, but I have had to re-read things again and again until the new ways sink in.

jgraham shared some notes with me (thanks!) about what his workflow looks like, and I want to document it for my own sake and perhaps yours:
git clone git@github.com:mozilla-b2g/fxos-certsuite.git

# Time passes

# To develop something on master
# Pull in all the new commits from master

git fetch origin

# Create a new branch (this will track master from origin,
# which we don't really want, but that will be fixed later)

git checkout -b my_new_thing origin/master

# Edit some stuff

# Stage it and then commit the work

git add -p
git commit -m "New awesomeness"

# Push the work to a remote branch
git push --set-upstream origin HEAD:jgraham/my_new_thing

# Go to the GH UI and start a pull request

# Fix some review issues
git add -p
git commit -m "Fix review issues" # or use --fixup

# Push the new commits
git push

# Finally, the review is accepted
# We could rebase at this point, however,
# we tend to use the Merge button in the GH UI
# Working off a different branch is basically the same,
# but you replace "master" with the name of the branch you are working off.



Friday, July 11, 2014

Introducing Http authentication for Mozharness.

A while ago, I asked a colleague (you know who you are! :P) of mine how to run a specific type of test job on tbpl on my local machine and he told me with a smirk, "With mozharness!"

I wanted to punch him (HR: nothing to see here! This is not a literal punch, a figurative one), however he was right. He had good reason to say that, and I knew why he was smiling. I had to close my mouth and take it.

Here's the explanation on why he said that: most jobs running inside of tbpl are being driven by Mozharness, however they're optimized to run within the protected network of Release Engineering. This is good. This is safe. This is sound. However, when we try to reproduce a job outside of the Releng network, it becomes problematic for various reasons.

Many times we have had to guide people who are unfamiliar with mozharness as they try to run it locally with success. (Docs: How to run Mozharness as a developer). However, on other occasions when it comes to binaries stored on private web hosts, it becomes necessary to loan a machine. A loaned machine can reach those files through internal domains since it is hosted within the Releng network.

Today, I have landed a piece of code that does two things:
  • Allows Http authentication to download files behind LDAP credentials
  • Changes URLs to point to publicly reachable domains
This change, plus the recently-introduced developer configs for Mozharness, makes it much easier to run mozharness outside of continuous integration infrastructure.

I hope this will help developers have a better experience reproducing the environments used in the tbpl infrastructure. One less reason to loan a machine!

This makes me *very* happy (see below) since I don't have VPN access anymore.





Using developer configs for Mozharness

To help developers run mozharness, I have landed some configs that can be appended to the command appearing on tbpl.

All you have to do is:
  • Find the mozharness script line in a log from tbpl (search for "script/scripts")
  • Look for the --cfg parameter and add it again, but this time ending with "_dev.py"
    • e.g. --cfg android/androidarm.py --cfg android/androidarm_dev.py
  • Also add the --installer-url and --test-url parameters as explained in the docs
Developer configs have these things in common:
  • They have the same name as the production one but end in "_dev.py"
  • They overwrite the "exes" dict with an empty dict
    • This allows you to use the binaries in your personal $PATH
  • They overwrite the "default_actions" list
    • The main reason is to remove the action called read-buildbot-configs
  • They fix URLs to point to the right publicly reachable domains
Here are the currently available developer configs:

Thursday, July 03, 2014

Tbpl's blobber uploads are now discoverable

What is blobber? Blobber is a server- and client-side set of tools that allows Releng's test infrastructure to upload files without requiring ssh keys to be deployed on the machines.

This is useful since it allows uploads of screenshots, crashdumps and any other file needed to debug what failed on a test job.

Up until now, if you wanted your scripts to determine which files were uploaded in a job, you would have to download the log and parse it to find the TinderboxPrint lines for Blobber uploads, e.g.
15:21:18 INFO - (blobuploader) - INFO - TinderboxPrint: Uploaded 70485077-b08a-4530-8d4b-c85b0d6f9bc7.dmp to http://mozilla-releng-blobs.s3.amazonaws.com/blobs/mozilla-inbound/sha512/5778e0be8288fe8c91ab69dd9c2b4fbcc00d0ccad4d3a8bd78d3abe681af13c664bd7c57705822a5585655e96ebd999b0649d7b5049fee1bd75a410ae6ee55af
Now, you can discover the set of files uploaded by looking at the uploaded_files.json that we upload at the end of all uploads. This can be found by inspecting the buildjson files or by listening to the pulse events. The key used is called "blobber_manifest_url", e.g.
"blobber_manifest_url": "http://mozilla-releng-blobs.s3.amazonaws.com/blobs/try/sha512/39e400b6b94ac838b4e271ed61a893426371990f1d0cc45a7a5312d495cfdb485a1866d7b8012266621b4ee4df0cf9aa7d0f6d0e947ff63785543d80962aaf9b",
In the future, this feature will be useful when we start uploading structured logs. It will help us avoid downloading logs just to extract metadata about the jobs!

No, your uploads are not this ugly
This work was completed in bug 986112. Thanks to aki, catlee, mtabara and rail for helping me get this out the door. You can read more about Blobber by visiting: "Blobber is live - upload ALL the things!" and "Blobber - local environment setup".



Tuesday, July 01, 2014

Down Memory Lane

It was cool to find an article from "The Senecan" which talks about how, through Seneca, Lukas and I got involved with and were hired by Mozilla. Here's the article.



Here's an excerpt:
From Mozilla volunteers to software developers 
It pays to volunteer for Mozilla, at least it did for a pair of Seneca Software Development students. 
Armen Zambrano and Lukas Sebastian Blakk are still months away from graduating, but that hasn't stopped the creators behind the popular web browser Firefox from hiring them. 
When they are not in class learning, the Senecans will be doing a wide range of software work on the company’s browser including quality testing and writing code. “Being able to work on real code, with real developers has been invaluable,” says Lukas. “I came here to start a new career as soon as school is done, and thanks to the College’s partnership with Mozilla I've actually started it while still in school. I feel like I have a head start on the path I've chosen.”  
Firefox is a free open source web browser that can...




Friday, June 20, 2014

My first A-team project: install all the tests!


As a welcome bug to the A-team, I had to change which tests get packaged.
The goal was to include all tests in tests.zip regardless of whether they are marked as disabled in the test manifests.

Changing the packaging was not too difficult, as I already had pointers from jgriffin; the problem came with the runners.
The B2G emulator and desktop mochitest runners did not read the manifests; they simply ran all tests that came inside tests.zip (even disabled ones).

Unfortunately for me, the mochitest runner code is very old, and it was hard to figure out how to make it work as cleanly as possible. I made a lot of mistakes and landed it incorrectly twice (an improper try landing, and I lost my good patch somewhere) - sorry Ryan!

After a lot of tweaking, reviews from jmaher, and help from ted & ahal, it landed last week.

For more details you can read bug 989583.

PS = Using trigger_arbitrary_builds.py was priceless to speed up my development.



Wednesday, June 11, 2014

Who doesn't like cheating on the Try server?

Have you ever forgotten about adding a platform to your Try push and had to push again?
Have you ever wished to *just* make changes to a tests.zip file without having to build it first?
Well, this is your lucky day!

In this wiki page, I describe how to trigger arbitrary jobs on your try push.
As always be gentle with how you use it as we all share the resources.

Go crazy!

Wednesday, May 28, 2014

How to create local buildbot slaves


For the longest time I have wished for *some* documentation on how to set up a buildbot slave outside of the Release Engineering setup, without needing to go through the Puppet manifests.

In a previous post, I documented how to set up a production buildbot master.
In this post, I'm only covering the slaves side of the setup.

Install buildslave

virtualenv ~/venvs/buildbot-slave
source ~/venvs/buildbot-slave/bin/activate
pip install zope.interface==3.6.1
pip install buildbot-slave==0.8.4-pre-moz2 --find-links http://pypi.pub.build.mozilla.org/pub
pip install Twisted==10.2.0
pip install simplejson==2.1.3
NOTE: You can figure out what to install by looking in here: http://hg.mozilla.org/build/puppet/file/ad32888ce123/modules/buildslave/manifests/install/version.pp#l19

Create the slaves

NOTE: I already have build and test masters on my localhost, on ports 9000 and 9001 respectively.
buildslave create-slave /builds/build_slave localhost:9000 bld-linux64-ix-060 pass
buildslave create-slave /builds/test_slave localhost:9001 tst-linux64-ec2-001 pass

Start the slaves

On a normal day, you can do this to start your slaves up:
 source ~/venvs/buildbot-slave/bin/activate
 buildslave start /builds/build_slave
 buildslave start /builds/test_slave



Thursday, May 22, 2014

Technical debt and getting rid of the elephants

Recently, I had to deal with code where I knew there were elephants in it and I did not want to see them: namely, adding a new build platform (mulet) and running a b2g desktop job through mozharness on my local machine.

As I passed by, I decided to spend some time going to get some peanuts to lure at least a few of those elephants out of there.

I know I can't use "the elephant in the room" metaphor like that but I just did and you just know what I meant :)

Well, how do you deal with technical debt?
Do you take a chunk every time you pass by that code?
Do you wait for the storm to pass by (you've shipped your awesome release) before throwing the elephants off the ship?
Or else?

Let me know; I'm eager to hear about your own de-elephantization stories.






Tuesday, May 13, 2014

Do you need a used Mac Mini for your Mozilla team? or your not-for-profit project?

If so, visit this form and fill it out by May 22nd (9 days from today).
There are a lot of disclaimers in the form. Please read them carefully.

Main ones:

  • The Minis are in Santa Clara, California
    • You will have to arrange the pick up or order enough
  • They come without operating system
    • You have to take care of buying and installing it
    • Remember: Linux can run on them; you will have to figure it out


These minis have been deprecated after 4 years of usage. Read more about it in here.
From http://en.wikipedia.org/wiki/Mac_Mini

UPDATE: Highlighted a couple of disclaimers




Monday, May 05, 2014

Releng goodies from Portlandia!

Last week, Mozilla's Release Engineering met at the Portland office for a team week.
The week was packed with talks and several breakout sessions.
We recorded a lot of our sessions and put all of them in here for your enjoyment! (with associated slide decks if applicable).

Here's a brief list of the talks you can find:
Follow us at @MozReleng and Planet Releng.

Many thanks to jlund for helping me record it all.

UPDATE: added thanks to jlund.

The Releng dreams are alive in Portland

Wednesday, April 23, 2014

Gaia code changes and how they trickle down into Mozilla's RelEng CI

Too long; did not read: in our monthly pushes report we more or less count all Gaia commits through the B2G-Inbound repository.

For the last few months, I've been creating reports about the pushes to the tbpl trees and I had to add some disclaimers about the code pushes to the Gaia repositories. I've decided to write the disclaimer in here and simply put a hyperlink to this post.

Contributions to the Gaia repositories are done through GitHub and are run through the Travis CI (rather than through the Release Engineering infrastructure). However, independently from the Travis CI, we bring the Gaia merges into the Release Engineering systems this way:
  • github.com
    • We mirror the github changes into our git setup (gaia.git)
    • These changes trigger the Travis CI
  • hg.mozilla.org
    • We convert our internal git repo to our hg repos (e.g. gaia-central)
    • There is a B2G Bumper bot that will change device manifests on b2g-inbound with gonk/gaia git changesets for emulator/device builds
    • There is a Gaia Bumper bot that will change device manifests on b2g-inbound with gaia hg changesets for b2g desktop builds
    • Those manifest changes indicate which gaia changesets to check out
    • This will trigger tbpl changes and run on the RelEng infrastructure

Here's an example:

Long story short: even though we don't have a Gaia tree on tbpl.mozilla.org, we test the Gaia changes through the B2G-Inbound tree; hence, we take Gaia pushes into account in the monthly pushes report.

For more information, the B2G bumper bot was designed in this bug.



Thursday, April 17, 2014

Mozilla's pushes - March 2014

Here's March's monthly analysis of the pushes to our Mozilla development trees (read about Gaia merges at the end of the blog post).
You can load the data as an HTML page or as a json file.

TRENDS

March (as February did before it) set a new all-time record for pushes.
We will soon have 8,000 pushes/month as our norm.
The only noticeable change in the distribution of pushes is that non-integration trees had a higher share of the cake (17.80% in March vs 14.60% in February).

HIGHLIGHTS

  • 7,939 pushes
    • NEW RECORD
  • 284 pushes/day (average)
    • NEW RECORD
  • Highest number of pushes/day: 435 pushes on March 4th
    • NEW RECORD
  • 16.07 pushes/hour (average)

GENERAL REMARKS

Try keeps on having around 50% of all the pushes.
The three integration repositories (fx-team, mozilla-inbound and b2g-inbound) account for around 30% of all the pushes.

RECORDS

  • March 2014 was the month with most pushes (7,939 pushes)
  • March 2014 has the highest pushes/day average with 284 pushes/day
  • February 2014 still holds the highest "pushes-per-hour" average with 16.57 pushes/hour
  • March 4th, 2014 had the highest number of pushes in one day with 435 pushes



DISCLAIMERS

  • The data collected prior to 2014 could be slightly off since different data collection methods were used
  • Gaia pushes are more or less counted. I will write a blog post about it in the near term.


Wednesday, April 16, 2014

Kiss our old Mac Mini test pool goodbye

Today we have stopped running test jobs on our old Revision 3 Mac Mini test pool (see previous announcement).

There's a very very long list of people that have been involved in this project (see bug 864866).
I want to thank ahal, fgomes, jgriffin, jmaher, jrmuizel and rail for their help on the last mile.

We're very happy to have finally decommissioned this non-datacenter-friendly infrastructure.

A bit of history

These minis were purchased back in early 2010 and we bought more than 300 of them.
At first, we ran Fedora 12, Fedora 12 x64, Windows XP, Windows 7 and Mac 10.5 on them. Later on we also added 10.6 to the mix (if my memory doesn't fail me).

Somewhere in 2012, we moved the Mac 10.6 testing to the new revision 4 Mac server minis and deprecated the 10.5 rev3 testing pool. We then re-purposed those machines to increase the Windows and Fedora pools.

By May of 2013, we stopped running Windows on them.
During 2013, we moved a lot of the Fedora testing to EC2.
Now, we have managed to move the B2G reftests and Firefox debug mochitest-browser-chrome to EC2.

NOTE: I hope my memory does not fail me

Delivery of the Mac minis (photo credit to joduinn)
Racked at the datacenter (photo credit to joduinn)

CHANGES: fixed small typo. Remove labels to prevent re-posting into planet feeds.



Wednesday, April 02, 2014

Mozilla's recent CI improvements save roughly 60-70% on our AWS bill

bhearsum, catlee, glandium, taras and rail have been working hard for the last few months at cutting our AWS bills by improving Mozilla RelEng's CI.


From looking at it, I can say that, with the changes they have made, we're saving roughly 60-70% on our AWS bill.

If you see them, give them a big pat on the back; this is huge for Mozilla.

Here are some of the projects that helped with this:



Friday, March 28, 2014

Mozilla's Pushes - February 2014

Here's February's monthly analysis (a bit late) of the pushes to our Mozilla development trees (Gaia trees are excluded).

You can load the data as an HTML page or as a json file.

TRENDS

  • We are staying on the 7,000 pushes/month range
  • Last year we only had 4 months with more than 7,000 pushes

HIGHLIGHTS

  • 7,275 pushes
  • 260 pushes/day (average)
    • NEW RECORD
  • Highest number of pushes/day: 421 pushes on 02/26
    • The current record is 427, set in January
  • Highest pushes/hour (average): 16.57 pushes/hour
    • NEW RECORD

GENERAL REMARKS

  • Try keeps on having around 50% of all the pushes
  • The three integration repositories (fx-team, mozilla-inbound and b2g-inbound) account for around 30% of all the pushes

RECORDS

  • August of 2013 was the month with most pushes (7,771 pushes)
  • February 2014 has the highest pushes/day average with 260 pushes/day
  • February 2014 has the highest "pushes-per-hour" average with 16.57 pushes/hour
  • January 27th, 2014 had the highest number of pushes in one day with 427 pushes

DISCLAIMER

  • The data collected prior to 2014 could be slightly off since different data collection methods were used
  • An attempt to gather all the data again will be made sometime this year



Wednesday, March 26, 2014

Moving away from the Rev3 Minis

In May last year, we managed to move the Windows unit tests from the Rev3 Mac Minis to the iX hardware.

Back in November, we were still running some desktop and b2g jobs on the Rev3 minis on Fedora and Fedora64 for the *trunk* trees.

This was less than ideal, not only because of the bad wait times (the pool of minis is out of capacity) but also because we're evacuating the SCL1 datacenter where those Rev3 minis are located. To stop using the minis, we needed to move to EC2 before April/May came around.

As of yesterday, we're running all jobs that run on the minis on EC2 instances as well, for all *trunk* trees and mozilla-aurora.

You can see the jobs running side-by-side on the minis and on EC2 in here:

Over the next few weeks you should see us moving away from the minis.

You can wait for the next blog post or follow along on bug 864866.

Stay tuned!

