Use PHP to query the Mimecast API

Mimecast has fairly detailed documentation and code examples for their API, but nothing for PHP.

Here’s a basic PHP code example for their “Get Account” API endpoint. I used their Python sample code as a starting point. You’ll need to fill in the base URL, your access and secret keys, and your application ID and key to use it.


// Setup required variables
$baseUrl = '';
$uri = '/api/account/get-account';
$url = $baseUrl.$uri;
$accessKey = 'YOUR ACCESS KEY';
$secretKey = 'YOUR SECRET KEY';
$appId = 'YOUR APPLICATION ID';
$appKey = 'YOUR APPLICATION KEY';

// Generate request header values
$requestId = uniqid();
$hdrDate = gmdate('r');

// Build the string to sign: date:requestId:uri:appKey
$dataToSign = implode(':', array($hdrDate, $requestId, $uri, $appKey));

// HMAC-SHA1 the string using the Base64-decoded secret key
$hmacSha1 = hash_hmac('sha1', $dataToSign, base64_decode($secretKey), true);

// Base64 encode the binary HMAC to produce the signature for the Authorization header
$sig = base64_encode($hmacSha1);

// Create request headers
$headers = array(
    'Authorization: MC '.$accessKey.':'.$sig,
    'x-mc-req-id: '.$requestId,
    'x-mc-app-id: '.$appId,
    'x-mc-date: '.$hdrDate,
    'Content-Type: application/json',
    'Accept: application/json',
);

$payload = json_encode(array('data' => array()));

$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, $url);
curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_HTTPHEADER, $headers);
curl_setopt($ch, CURLOPT_POSTFIELDS, $payload);

// Execute the request and decode the JSON response
$response = curl_exec($ch);
curl_close($ch);
$result = json_decode($response, true);
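If you want to sanity-check the signing step in isolation, here’s a self-contained sketch. The keys and values below are made up for illustration only; substitute your real Mimecast credentials.

```php
<?php
// Hypothetical credentials -- replace with your own Mimecast values.
$accessKey = 'example-access-key';
$secretKey = base64_encode('example-secret-key');
$appKey    = 'example-app-key';

$uri       = '/api/account/get-account';
$requestId = uniqid();
$hdrDate   = gmdate('r');

// date:requestId:uri:appKey, HMAC-SHA1 signed with the Base64-decoded secret key
$dataToSign = implode(':', array($hdrDate, $requestId, $uri, $appKey));
$sig = base64_encode(hash_hmac('sha1', $dataToSign, base64_decode($secretKey), true));

echo 'Authorization: MC '.$accessKey.':'.$sig."\n";
```

One easy property to check: a valid signature always Base64-decodes to exactly 20 bytes, the length of a SHA1 digest.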



Cloudways reuses SSH host keys on Vultr, Linode, AWS, and GCP

I was surprised to discover that Cloudways reuses the same SSH host key when provisioning Vultr, Linode, AWS, and GCP servers. Apparently SSH host key reuse isn’t a new problem and some fingerprints have been seen on as many as 250,000 devices.

I currently have two Vultr servers on Cloudways:

Vultr Server 1:

Vultr Server 2:

How do I get the server SSH fingerprint?

The easiest way is to run this at a command prompt (replacing hostname with the domain or IP address of the server):

ssh-keygen -E md5 -lf <( ssh-keyscan hostname 2>/dev/null )

Can I reproduce this on Vultr servers through Cloudways?

I created one in Dallas and was assigned the IP and got the fingerprint b7:73:78:ac:f4:9f:01:ad:b5:7e:e2:e6:a5:93:1c:a2. I created a second one in Dallas and was assigned the IP with the same fingerprint of b7:73:78:ac:f4:9f:01:ad:b5:7e:e2:e6:a5:93:1c:a2.

That fingerprint is in use on 269 servers in at least 5 countries.

What about Linode on Cloudways?

I created a Linode server in Dallas and was assigned the IP with the fingerprint 55:95:ea:0d:aa:37:0a:96:c6:ee:12:4f:50:9e:ab:9a. The second server in Dallas got the IP and the same fingerprint of 55:95:ea:0d:aa:37:0a:96:c6:ee:12:4f:50:9e:ab:9a.

That fingerprint is in use on 120 servers in at least 5 countries.

What about Amazon Web Services on Cloudways?

I created two servers and got the same fingerprint. It’s in use on 70 servers.

  • Location: Virginia, IP:, Fingerprint: da:ca:e9:fb:10:0b:61:19:5b:23:ca:39:36:60:ff:af
  • Location: Virginia, IP:, Fingerprint: da:ca:e9:fb:10:0b:61:19:5b:23:ca:39:36:60:ff:af

What about Google Cloud Platform on Cloudways?

I created two servers and also got the same fingerprint. It’s in use on 47 servers.

  • Location: Iowa, IP:, Fingerprint: b2:d0:23:6d:90:50:27:e6:92:53:b6:98:0f:18:52:f8
  • Location: Iowa, IP:, Fingerprint: b2:d0:23:6d:90:50:27:e6:92:53:b6:98:0f:18:52:f8

What about Digital Ocean on Cloudways?

All three of the Digital Ocean servers I created had unique fingerprints and didn’t show up on Shodan:

  • Location: New York, IP:, Fingerprint: d1:76:7d:b0:2d:fc:9e:bb:f3:40:f7:53:b7:87:fe:ca
  • Location: San Francisco, IP:, Fingerprint: e8:3f:72:17:e3:ba:02:e6:e7:10:a9:4a:3d:d0:83:24
  • Location: San Francisco, IP:, Fingerprint: 2b:5d:55:7c:36:e7:04:86:17:66:ad:40:77:7e:cd:36

How bad is this?

It doesn’t appear this directly compromises my servers. However, it does allow an attacker to impersonate my server. They could achieve this either by stealing the private key off another compromised Cloudways server or by provisioning a new Cloudways server, and then finding a way to redirect my SSH traffic to the server they control. Reusing an SSH key across multiple servers might make sense if all of them belonged to me. (It’s probably what GitHub and Bitbucket do so that their customers can use Git over SSH.) However, since these Cloudways servers are assigned to many customers, it’s a risk, and it’s definitely not good.

TL;DR: There’s a possibility of an impersonation/MitM attack, but intercepting and successfully reading the encrypted SSH traffic shouldn’t be possible.

It also makes it easy to identify a large list of servers provisioned by Cloudways. If someone found a vulnerability on a Cloudways server, this would make it easy to target the vulnerable servers.

Is Cloudways going to fix this?

I reported this via live chat on October 3, 2019 and then to their privacy email address on October 3, 2019, and heard back on October 9, 2019. They don’t seem to consider this a major issue, but they did say they are working on fixing it.

DMARC Monitoring Tools Comparison

I’ve been testing DMARC monitoring tools in order to get my personal and work domains to DMARC enforcement. Here’s what I’ve learned from testing a handful of different services.


Valimail

Valimail may have the best available product, but I believe they have priced themselves out of the small-business market. I received a demo but did not have a chance to use their product hands-on. Their SPF macro expansion tool is impressive, and having them fully manage your SPF and DKIM DNS records is incredibly convenient.


Fraudmarc

Fraudmarc does not provide enough detail to fully understand why your email sources are not compliant. For a given source, it tells you whether it was SPF- and/or DKIM-aligned, but if it wasn’t, it doesn’t tell you whether it was aligned to a different domain or not aligned at all. They do provide an appealing SPF flattening tool called SPF Compression. They provide a free plan for “low message volumes.”

DMARC Analyzer

DMARC Analyzer has an attractive website that adequately conveys which of your sources are compliant and why the failing ones are failing. It requires more clicking to expand details than I would like, but it’s functional. They provide a free plan for up to 100k monthly DMARC-compliant messages.


250ok

250ok provides a suite of tools to help monitor and improve your email deliverability. Their DMARC reporting interface requires a little too much mousing over to see details, but it’s functional. Unfortunately, their system doesn’t differentiate between a message that merely passes SPF and/or DKIM and one that is actually aligned for DMARC. For example, a non-whitelabeled email sent with SendGrid could pass SPF with SendGrid’s domain, but it would fail DMARC because the from address is not aligned. 250ok considers this DMARC compliant even though it is not. 250ok says these messages are ARC compliant, but their system doesn’t yet have a way to convey that to the user. (See below.) 250ok’s own domain is also not set to enforce a policy.


Dmarcian

Dmarcian’s user interface is a little rough around the edges, but it does the best job of conveying which of your sources are compliant, which are failing, and why they are failing. They provide a free plan for up to 10k monthly DMARC-compliant messages and up to two domains (sub-domains are counted separately). (A few months ago the limit was 100k monthly emails, and then they dropped it to 50k monthly emails. 10k monthly emails seems to be a very recent change.)

ARC Support

ARC is a method of validating forwarded emails that would otherwise fail DMARC validation. It’s still not fully supported, but more mailbox providers seem to be recognizing it. As I stated above, 250ok is parsing it but not yet doing a good job of showing the results. DMARC Analyzer says it’s on their roadmap, but they have not yet implemented it. Dmarcian was not aware of ARC and seemed skeptical even though I provided them with links to the specification. I do not know the status of ARC support for Valimail or Fraudmarc.


Overall, Dmarcian seems to provide the most useful analytics for low-volume domains at a reasonable price. If you are looking for a free option or need to monitor a lot of (sub-)domains, DMARC Analyzer may be a better choice.

Using TCPDF with Symfony 2

Using TCPDF with Symfony 2 is pretty simple. However, there are a few problems that may arise.

namespace Acme\DemoBundle\Controller;

class PdfController extends Controller
{
    public function pdfAction()
    {
        $pdf = new \TCPDF();

        // Construct the PDF.

        $pdf->Output();
    }
}

Easy enough. The PDF loads, but we get this error in the logs:

request.CRITICAL: Uncaught PHP Exception LogicException: "The controller must return a response (null given). Did you forget to add a return statement somewhere in your controller?"

We can fix that:

namespace Acme\DemoBundle\Controller;

use Symfony\Component\HttpFoundation\Response;

class PdfController extends Controller
{
    public function pdfAction()
    {
        $pdf = new \TCPDF();

        // Construct the PDF.

        $pdf->Output();

        return new Response(); // To make the controller happy.
    }
}

Add some authentication and remember-me tokens, close the browser, relaunch the browser, and visit the PDF page. We get a new error:

request.CRITICAL: Uncaught PHP Exception RuntimeException: "Failed to start the session because headers have already been sent by "[...]/vendor/"

Darn. TCPDF::Output() sends headers before Symfony has the chance. We can fix that too:

namespace Acme\DemoBundle\Controller;

use Symfony\Component\HttpFoundation\StreamedResponse;

class PdfController extends Controller
{
    public function pdfAction()
    {
        $pdf = new \TCPDF();

        // Construct the PDF.

        // Defer output until Symfony has sent its own headers.
        return new StreamedResponse(function () use ($pdf) {
            $pdf->Output();
        });
    }
}

Perfect. Now Symfony and TCPDF::Output() can both send their headers, and everything plays nice.

OpenVZ Ubuntu 12.04 Upgrade to 14.04 Logging Problems

I recently ran do-release-upgrade on an OpenVZ VPS running Ubuntu 12.04. The process was surprisingly smooth, and I ended up with a functional install of Ubuntu 14.04. However, after a couple days, I realized that nothing was getting logged (auth.log, mail.log, syslog, etc.). Nginx logs continued working just fine. Upon further review of what was installed, upgraded, and removed, I realized that sysklogd was uninstalled, but nothing was installed to replace it. I ran:

aptitude install rsyslog

and now everything appears to be logging as expected.

I’m not sure if this was a problem because of Ubuntu, OpenVZ, or my hosting company. Regardless, it’s fixed now.

Symfony 2: Using @ParamConverter with multiple Doctrine entities

The Symfony 2 documentation describes how to use parameter converters to translate slugs to entities, but its example does not enforce the relationship between the two entities. Here’s their example:

/**
 * @Route("/blog/{date}/{slug}/comments/{comment_slug}")
 * @ParamConverter("post", options={"mapping": {"date": "date", "slug": "slug"}})
 * @ParamConverter("comment", options={"mapping": {"comment_slug": "slug"}})
 */
public function showAction(Post $post, Comment $comment)
{
    // ...
}

Assuming that Post has id, date, and slug attributes and that Comment has id, post, and slug attributes, the above example does not require that the Post slug in the URL match the Comment’s Post. Here’s an example that requires that the Post and Comment are related:

/**
 * @Route("/blog/{post_date}/{post_slug}/comments/{slug}")
 * @ParamConverter("post", options={"mapping": {"post_date": "date", "post_slug": "slug"}})
 */
public function showAction(Post $post, Comment $comment)
{
    // ...
}

This example works because Post is processed first and because $post is named the same as the Comment::$post relationship. When it comes time to process the Comment, it attempts to use slug and post to find the Comment instead of just slug. (No @ParamConverter annotation is necessary for the Comment because the parameters are named the same as the attributes.) If $post doesn’t match $comment->post, a 404 is returned, no additional checks required.
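For reference, here’s a minimal sketch of the entity attributes this assumes. The names are illustrative stand-ins; the Doctrine mapping annotations are omitted.

```php
<?php
// Minimal stand-ins for the entities assumed above (Doctrine mappings omitted).
class Post
{
    public $id;
    public $date;
    public $slug;
}

class Comment
{
    public $id;
    public $post; // Relation to Post; the name must match the "post" controller argument.
    public $slug;
}
```

Because the Comment converter falls back to matching route parameters and existing arguments against field names, both slug and the already-resolved $post end up in the Comment lookup.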

Vim: Indent PHP case and default statements in a switch block

By default, Vim does not indent case and default statements inside of a switch block in a PHP file:

switch ($foo) {
case 'bar':
    // do something
}

It turns out a single line in one’s vimrc can fix that:

let g:PHP_vintage_case_default_indent = 1

Now indenting is correct:

switch ($foo) {
    case 'bar':
        // do something
}

USB Flash Drive for a 2012 Honda Civic

I recently purchased a PNY Attaché USB Flash Drive to plug into our 2012 Honda Civic. I was rather disappointed when I discovered that the drive was “unsupported” by the vehicle. It also appears that I am not the only one to have this problem. I returned the drive and purchased a SanDisk Cruzer Glide USB Flash Drive instead. The Honda Civic recognized it immediately and began playing the loaded music.

It’s worth noting that the Honda Civic only appears to show folders that contain music. Therefore, Artist > Album > Song will be displayed as Album > Song.

Fix a Seized Up Computer Fan

I’ve had some noisy and stuck 120mm computer fans in both my desktop and my server. I initially planned on replacing them but realized that all they really needed was some oil. The process is pretty simple and well outlined by TechRepublic. I removed the label and the small plug, put a drop of (sewing) machine oil on the shaft, gave it a spin, and reassembled. It’s nice to have some quiet in the home office again.

Dell Latitude D620 Laptop and NVIDIA Drivers for Ubuntu

After a recent update to Ubuntu 12.04, my Dell Latitude D620 Laptop quit mirroring the display across the laptop screen and the television connected to the dock’s S-Video port. I had previously chosen not to upgrade to Ubuntu 12.10 because of this issue, and 12.04 had worked correctly until today. It seems that the driver installed quit supporting some of the options I needed:

root@ishta:~# nvidia-xconfig --twinview
nvidia-xconfig: unrecognized option: "--twinview"

Invalid commandline, please run `nvidia-xconfig --help` for usage information.

It appears that installing an older version of the NVIDIA drivers:

aptitude install nvidia-173

and removing the current version of the drivers:

aptitude purge nvidia-current

solves the problem. Hopefully this will work for future versions of Ubuntu as well.