Override Jenkins stage Function

Recently, I needed a mechanism to identify, as part of a try/catch block, which stage in a Jenkins Groovy Scripted Pipeline was the last to execute before the catch block was called.

Jenkins does not currently store details about the last stage to run outside of the context of that specific stage. In other words, env.STAGE_NAME is valid within a particular stage("I'm a stage") { /* valid here */ } block, but not in, say, a catch (Exception e) { /* where was I called from? */ } block.

To get around this, I found a few examples and cobbled together something that I believe also leaves room for future extension. I present to you extensibleContextStage:

// Jenkins Groovy stages are somewhat lacking in their ability to persist
// context state beyond the lifespan of the stage.
// For example, to obtain the name of the last stage to run,
// one needs to store the name in an environment variable (JENKINS-48315)
// https://issues.jenkins-ci.org/browse/JENKINS-48315

// We can create an extensible stage to provide additional context to the pipeline
// about the state of the currently running stage.

// This also provides a capability to extend pre- and post-stage operations.

// Idea / base code borrowed from https://stackoverflow.com/a/51081177/11125318
// and from https://issues.jenkins-ci.org/browse/JENKINS-48315?focusedCommentId=321366&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-321366

def call(name, Closure closure) {
    env.BUILD_LAST_STAGE_STARTED = name
    def result
    try {
        stage(name) {
            result = closure.call()
        }
        env.BUILD_LAST_STAGE_SUCCEEDED = name
    }
    catch (Exception ex) {
        env.BUILD_LAST_STAGE_FAILED = name
        throw ex
    }
    return result
}
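
For the call syntax used in the examples below to work, this code typically lives in a Jenkins Shared Library as vars/extensibleContextStage.groovy, so the pipeline resolves the step by file name (the library name in this sketch is a placeholder):

// In the shared library repo (hypothetical layout):
//   vars/extensibleContextStage.groovy   <-- the def call(...) above lives here
//
// At the top of the Jenkinsfile that uses it:
@Library('my-shared-library') _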

This is a drop-in replacement for stage(String name) { closure } blocks in a Jenkins Groovy Scripted Pipeline, with the added benefit of three additional environment variables:

  • env.BUILD_LAST_STAGE_STARTED
  • env.BUILD_LAST_STAGE_SUCCEEDED
  • env.BUILD_LAST_STAGE_FAILED

So, as a full example, one can now do this (which was previously awkward):


try {
    extensibleContextStage("Do some things")
    {
        //whatever
    }
    extensibleContextStage("Do some More things")
    {
       throw new Exception("MAYHEM!")
    }
    extensibleContextStage("Do some final things")
    {
        //whatever
    }
}
catch(Exception e){
    // At this point, with a normal stage, we wouldn't know where MAYHEM came from,
    // but with extensibleContextStage, we can look at either
    // env.BUILD_LAST_STAGE_FAILED or env.BUILD_LAST_STAGE_STARTED
    // to know that "Do some More things" was the offending stage.
    // This is super handy for sending "helpful" notifications to Slack/email.
}

I hope this helps someone (even if it’s just my future self).

Terminate a stuck Jenkins job

Sometimes (especially after an ungraceful process restart), Jenkins jobs will be stuck in a running state, and cancelling through the UI just doesn’t work.

Fortunately, jobs can be stopped via the Jenkins script console with this command (courtesy of https://stackoverflow.com/a/26306081/11125318):

Jenkins.instance.getItemByFullName("JobName")
                .getBuildByNumber(JobNumber)
                .finish(
                        hudson.model.Result.ABORTED,
                        new java.io.IOException("Aborting build")
                );
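
If the job lives inside a folder, or is a branch of a Multibranch Pipeline, getItemByFullName expects the full path. A hypothetical example (the folder, job, branch, and build number are all made up):

// Folder "Team", Multibranch Pipeline "MyApp", branch "master", stuck at build #42
Jenkins.instance.getItemByFullName("Team/MyApp/master")
                .getBuildByNumber(42)
                .finish(
                        hudson.model.Result.ABORTED,
                        new java.io.IOException("Aborting build")
                );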

Obtaining Git Repo URL from Jenkins Changeset: Unsolved

I’m attempting to obtain a Git repo URL from a Jenkins changeset in a Groovy Scripted Pipeline, but I keep running into the same issue: the browser property (obtained via .getBrowser()) on my hudson.plugins.git.GitChangeSetList object is null.

I’m running the code below (with inline “status comments”) from the Jenkins Groovy script console in an attempt to extract the repo URL from the changesets of a Jenkins Multibranch Groovy Scripted Pipeline:

def job = Jenkins.instance.getItem("MyJenkinsJob") 
def branch = job.getItems().findAll({ 
            item -> item.getDisplayName().contains("Project/CreateChangeLogs")
        })

printAllMethods(branch[0].getFirstBuild()) //this works, and is a org.jenkinsci.plugins.workflow.job.WorkflowRun

def builds = branch[0].getBuilds()
def currentBuild = builds[0]

currentBuild.changeSets.collect { 
  printAllMethods(it) // this works too, and is a hudson.plugins.git.GitChangeSetList.
  // enumerated methods are equals(); getClass(); hashCode(); notify(); notifyAll(); toString(); wait(); createEmpty(); getBrowser(); getItems(); getKind(); getRun(); isEmptySet(); getLogs(); iterator(); 

  it.getBrowser().repoUrl // this fails
  // the error is java.lang.NullPointerException: Cannot get property 'repoUrl' on null object
}

I found the printAllMethods utility function here (https://bateru.com/news/2011/11/code-of-the-day-groovy-print-all-methods-of-an-object/):

  void printAllMethods( obj ){
    if( !obj ){
        println( "Object is null\r\n" );
        return;
    }
    if( !obj.metaClass && obj.getClass() ){
        printAllMethods( obj.getClass() );
        return;
    }
    def str = "class ${obj.getClass().name} functions:\r\n";
    obj.metaClass.methods.name.unique().each{ 
        str += it+"(); "; 
    }
    println "${str}\r\n";
}

The API spec for GitChangeSetList indicates that it extends hudson.scm.ChangeLogSet, which implements getBrowser(), so the call should be valid.

Additionally, the source for GitChangeSetList invokes the super() constructor with the passed browser object.

At this point, I’m probably going to continue diving through the source code until I figure it out.

It looks like this is a documented issue with Jenkins: https://issues.jenkins-ci.org/browse/JENKINS-52747

And a (somewhat) related StackOverflow post: https://devops.stackexchange.com/questions/3798/determine-the-url-for-a-scm-trigger-from-inside-the-build
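
In the meantime, a possible workaround (untested here, and assuming a Multibranch Pipeline whose scm global is a hudson.plugins.git.GitSCM) is to read the remote URL from the job’s SCM configuration inside the pipeline itself, rather than from the changeset’s browser:

// Inside the Jenkinsfile (not the script console). For a Git-backed Multibranch
// Pipeline, 'scm' is a GitSCM whose remote configs carry the repo URL.
// Note: non-whitelisted methods like these may require script approval in the sandbox.
def repoUrl = scm.getUserRemoteConfigs()[0].getUrl()
echo "Repository URL: ${repoUrl}"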

VAInfo: Verify Hardware Accelerated Video Support

On Ubuntu (and possibly other Linux distros), run vainfo to see which Intel QuickSync profiles are supported.
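
If the tool isn’t already installed, it ships in a package that (on Ubuntu, at least) is simply named vainfo:

sudo apt-get install vainfo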

For example, these profiles are supported on an Intel Haswell chip:

libva info: VA-API version 0.39.0
libva info: va_getDriverName() returns 0
libva info: Trying to open /usr/lib/x86_64-linux-gnu/dri/i965_drv_video.so
libva info: Found init function __vaDriverInit_0_39
libva info: va_openDriver() returns 0
vainfo: VA-API version: 0.39 (libva 1.7.0)
vainfo: Driver version: Intel i965 driver for Intel(R) Haswell Desktop - 1.7.0
vainfo: Supported profile and entrypoints
      VAProfileMPEG2Simple            : VAEntrypointVLD
      VAProfileMPEG2Simple            : VAEntrypointEncSlice
      VAProfileMPEG2Main              : VAEntrypointVLD
      VAProfileMPEG2Main              : VAEntrypointEncSlice
      VAProfileH264ConstrainedBaseline: VAEntrypointVLD
      VAProfileH264ConstrainedBaseline: VAEntrypointEncSlice
      VAProfileH264Main               : VAEntrypointVLD
      VAProfileH264Main               : VAEntrypointEncSlice
      VAProfileH264High               : VAEntrypointVLD
      VAProfileH264High               : VAEntrypointEncSlice
      VAProfileH264MultiviewHigh      : VAEntrypointVLD
      VAProfileH264MultiviewHigh      : VAEntrypointEncSlice
      VAProfileH264StereoHigh         : VAEntrypointVLD
      VAProfileH264StereoHigh         : VAEntrypointEncSlice
      VAProfileVC1Simple              : VAEntrypointVLD
      VAProfileVC1Main                : VAEntrypointVLD
      VAProfileVC1Advanced            : VAEntrypointVLD
      VAProfileNone                   : VAEntrypointVideoProc
      VAProfileJPEGBaseline           : VAEntrypointVLD


Display HTTPS X509 Cert from Linux CLI

Recently, while attempting a git pull, I was confronted with the following error:

Peer's certificate issuer has been marked as not trusted by the user.

The operation worked from a browser on my dev machine, and closer inspection revealed that the cert used to serve the GitLab service was valid; but for some reason, the remote CentOS server couldn’t pull from it.

I found this post on StackOverflow detailing how to retrieve the X509 cert used to secure an HTTPS connection:

echo | openssl s_client -showcerts -servername MyGitServer.org -connect MyGitServer.org:443 2>/dev/null | openssl x509 -inform pem -noout -text

This was my ticket to discover why Git on my CentOS server didn’t like the certificate: the CentOS host was resolving the wrong DNS host name, and therefore using an invalid cert for the service.
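
To confirm a DNS mismatch like this, it helps to compare what the host’s own resolver returns against a known-good public resolver (the hostname below is the same placeholder used above):

getent hosts MyGitServer.org          # what this host's resolver returns
dig +short MyGitServer.org @8.8.8.8   # what a public resolver returns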

And now a Haiku:

http://i.imgur.com/eAwdKEC.png

Git: Replace Root Commit with Second Commit

While migrating code between version control systems (in my case, SourceGear Vault to Git, using an open-source C# program called vault2git), it’s sometimes necessary to pre-populate the first commit in the target system.

This yields an empty commit (git commit -m "initial commit" --allow-empty) with today’s timestamp, which is chronologically out of order with the incoming changeset migration.

After completing the migration, the second commit is the one I’d actually like to be the root.

It took me a while to figure this out, but thanks to
Greg Hewgill on Stack Overflow, I was able to replace the first commit of my branch with the second commit (and subsequently update the SHA1 hashes of all child commits) using this command:

git filter-branch --parent-filter "sed 's/-p <the__root_commit>//'" HEAD
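
Here, <the__root_commit> is the SHA of the empty root commit being dropped as a parent; if it isn’t handy, the root commit(s) reachable from the current branch can be listed with:

git rev-list --max-parents=0 HEAD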

Intermittently Slow IIS web site

TL;DR:

  • An issue in the Windows Management Instrumentation (WMI) performance counter collection process caused periodic system-wide performance degradation.
  • This issue became visible when our infrastructure monitoring software invoked specific WMI queries.
  • We disabled the specific WMI query set which was causing the performance issues, and the problem went away.

A few days ago one of our clients began reporting performance issues on one of their web sites. This site is an IIS web application responsible for rendering visualizations of very large data sets (hundreds of gigabytes). As such, the application pool consumes a corresponding amount of RAM (which is physically available on the server).

Normally, these sites (I manage a few hundred instances) are fast, with most queries returning in under 300ms; however, this one instance proved difficult. To make matters worse, the performance issues were intermittent: most of the time, the site was blazing fast, but sometimes the site would hang for minutes.

After a few hours of observation, one of my team members noticed a correlation between the site’s performance issues and a seemingly unrelated process on the host: WmiPrvSe.exe.

I began digging in and was able to corroborate this correlation by looking at the process’s CPU usage over time (using ELK / Metricbeat to watch Windows processes). Sure enough, there is a direct correlation between WmiPrvSe.exe using ~3-4% CPU and IIS logs indicating a timeTaken of greater than 90 seconds. This correlation also established an interval between instances of the issue: 20 minutes.

I fired up Sysinternals’ ProcMon.exe to get a better handle on what exactly WmiPrvSe.exe was doing during these so-called “spikes”. I observed an obscene count of Registry queries against things that look like performance counters (RegQueryValue, RegCloseKey, RegEnumKey, RegOpenKey). Note that there are multiple instances of WmiPrvSe.exe running on the system, but only one instance was “misbehaving”: the one running as NT AUTHORITY\SYSTEM (which also happens to have the lowest PID). The instances running as NT AUTHORITY\NETWORK SERVICE and as NT AUTHORITY\LOCAL SERVICE did not seem to be misbehaving.

Almost all of the registry keys in question contained the string Performance or PERFLIB; many (but not all) queries were against keys within HKLM\System\CurrentControlSet\Services.

I knew that I had Elastic’s “Beats” agents installed on this host; could Metricbeat, or one of my other monitoring tools, be the culprit? I tried disabling all of the Beats agents (Filebeat, Metricbeat, Winlogbeat, etc.), but was still seeing the intermittent spikes in WmiPrvSe.exe CPU usage correlating with slow page loads from IIS.

Stumped, I searched for how to capture WMI application logs, and found this article: https://docs.microsoft.com/en-us/windows/desktop/wmisdk/tracing-wmi-activity.

I ran the suggested command (Wevtutil.exe sl Microsoft-Windows-WMI-Activity/Trace /e:true) and fired up Event Viewer (as admin) to browse to the log path above. Bingo.

Log hits in Microsoft-Windows-WMI-Activity/Trace included mostly checks against the networking devices: select __RELPATH, Name, BytesReceivedPersec, BytesSentPersec, BytesTotalPersec from Win32_PerfRawData_Tcpip_NetworkInterface

These WMI queries were executed by the ClientProcessId owned by nscp.exe.
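
To reproduce the impact without waiting for the agent’s schedule, essentially the same query (minus the __RELPATH system property) can be issued manually from PowerShell while watching WmiPrvSe.exe; this is my own reproduction step, not anything NSCP ships:

# Run the same raw-perf-data query the monitoring agent issues, and time how
# long WMI takes to answer it.
Measure-Command {
    Get-WmiObject -Query "SELECT Name, BytesReceivedPersec, BytesSentPersec, BytesTotalPersec FROM Win32_PerfRawData_Tcpip_NetworkInterface"
}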

I perused the source code for NSCP a bit and discovered that NSCP’s network queries are executed through WMI (https://github.com/mickem/nscp/blob/master/modules/CheckSystem/check_network.cpp#L105), while the standard performance counter queries are executed through PDH (https://github.com/mickem/nscp/blob/master/modules/CheckSystem/pdh_thread.cpp#L132).

Something else I noticed was that the Microsoft-Windows-WMI-Activity/Operational log contained events directly corresponding to the issue at hand: WMIProv provider started with result code 0x0. HostProcess = wmiprvse.exe; ProcessID = 3296; ProviderPath = %systemroot%\system32\wbem\wmiprov.dll

Some more creative Google searches yielded an interesting issue in a GitHub repo for a different project: “CPU collector blocks every ~17 minutes on call to wmi.Query” (#89).

Sounds about right.

Skimming through the issue, I see this, which sets off the “ah-ha” moment:

Perfmon uses the PDH library, not WMI. I did not test with Perfmon, but PDH is not affected.


leoluk commented on Feb 16, 2018 (https://github.com/martinlindhe/wmi_exporter/issues/89#issuecomment-366195581)

Now knowing that only NSCP’s check_network uses WMI, I found the documentation to disable the network routine in nscp’s CheckSystem module: https://docs.nsclient.org/reference/windows/CheckSystem/#disable-automatic-checks

I added the bits to my nsclient.ini config to disable automatic network checks, restarted NSCP, and confirmed the performance issue was gone:

[/settings/system/windows]
# Disable automatic checks
disable=network

I’ve opened an issue on NSCP’s GitHub page for this problem: https://github.com/mickem/nscp/issues/619


Tail a file on Windows

On almost every Unix system, we have tail -f to watch the end of *really really big* files.

When faced with a 36 GB log file on Windows, the tooling is often lacking.

I borrowed / adapted a little PowerShell function to extract the last n log lines from a file and write them to a new file:
https://gist.github.com/crossan007/b5e8ac4579ba61eb1967315657406751

Partially borrowed from: https://stackoverflow.com/questions/36507343/get-last-n-lines-or-bytes-of-a-huge-file-in-windows-like-unixs-tail-avoid-ti
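
As a minimal sketch of the same idea (this is not the gist itself, and the paths are made up), PowerShell’s built-in Get-Content can read a file from the end with -Tail, which avoids streaming the whole 36 GB:

# Grab the last 10,000 lines of a huge log and write them to a smaller file.
Get-Content -Path 'C:\Logs\huge.log' -Tail 10000 |
    Set-Content -Path 'C:\Logs\huge-tail.log'

# Or follow the file live, roughly like tail -f:
Get-Content -Path 'C:\Logs\huge.log' -Tail 50 -Wait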

TIL: Java code in Jenkins pipelines runs on the master

I was trying to read a file with java.io.File in a Jenkins Groovy Scripted Pipeline on a non-master node, and I kept getting an exception that the file was not found (java.io.FileNotFoundException).

Turns out that Java code written in Scripted Pipelines (Groovy) runs on the master node: https://issues.jenkins-ci.org/browse/JENKINS-37577. This is as-designed behavior; accessing files in the workspace of a non-master node should use the readFile step from the Pipeline Basic Steps DSL: https://jenkins.io/doc/pipeline/steps/workflow-basic-steps/#pwd-determine-current-directory
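
A minimal before/after sketch (the agent label and file name are hypothetical):

node('linux-agent') {
    // BAD: constructs a java.io.File on the master's JVM, but the file lives in
    // the agent's workspace, so this throws java.io.FileNotFoundException:
    //   def text = new File("${env.WORKSPACE}/version.txt").text

    // GOOD: the readFile step runs through the Pipeline DSL and reads from the
    // agent's workspace:
    def text = readFile 'version.txt'
    echo text
}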

I’m thoroughly embarrassed at how many failed Jenkins jobs and alerts I’ve triggered while discovering this.