My home office – the peripherals

After talking about my new office desk, chair and monitor arm, I wanted to highlight the other tools on my desk.

Disclaimer: I’ve bought all my gear myself with my own hard-earned money. So no sponsorship here! Which is why I can give you the good and the bad.

Because I like gaming and I love Logitech gear, I got myself the G502 Hero gaming mouse. It’s a great mouse and I can’t decide if I like it more than my MX Master 2S. The G502 is attached to my desk and I use the MX when I take my laptop with me. The MX has a very long battery life and it can easily switch between multiple PCs, but the G502 is a lot more sensitive and a lot more customisable. For gaming, I love the G502; for coding and day-to-day use, I prefer the MX.

My keyboard is the Microsoft Comfort Curve 3000, though there is already an upgrade: the Wireless Comfort Desktop 5050. I love the ergonomic design; it really is a lot friendlier on my wrists. I don’t think I want to go back to a normal keyboard. The keys type very smoothly. I just wish I could disable the CAPS LOCK KEY BECAUSE… damnit, it happened again. That is a complaint irrespective of the type of keyboard; normal keyboards, ergonomic ones, they all still have a CAPS LOCK KEY… damnit. I think the Logitech Ergo K860 has the option to turn it off or remap it. That, and it can switch between multiple PCs like the MX Master 2S mouse.

To improve the video quality during teleconferences, I bought myself a Logitech BRIO webcam. This is a good upgrade from my laptop camera. The image looks crisper and it recognises me in a flash. From friends I’ve heard that the image over Skype is not great, but I think that might be Skype itself, as Teams, Zoom and Discord do produce nice video. The audio is good, but I’ve heard that it’s a bit hollow compared to the mic on my headset.

My headset is a Logitech G933 surround sound one. I must say, that surround sound really is awesome. When I play video games, this really comes to life as I can clearly hear which direction the enemies are coming from. It’s comfortable, even when I game for a few hours (too few and far between). The microphone is really superb as well. It captures a nice, warm sound and filters out background noise effortlessly.

The reason I focused so much on connecting to multiple PCs earlier is because my laptop and PC share a desk. So I had to find a way to share my keyboard, mouse, headset and webcam between these two devices. Cue the UGreen 2 In 4 Out USB Sharing Box. This little device allows me to plug in 4 USB devices: headset receiver, keyboard, mouse and webcam. With a single press of the button on top, I can switch which computer they are attached to. My PC streams to my screen over the DisplayPort input and my laptop connects via HDMI. Now I press the button on top of the UGreen box, switch the input of the screen and I’m good to go. Unfortunately, I should have taken a bigger UGreen box, because if I want to share additional devices (say a USB microphone) I’ll have to start choosing.

The only thing that is directly plugged into my PC is my Logitech G13 gaming keyboard. I received this as a birthday gift from my wife. I had been looking for something like this, but I kept hesitating. When my birthday rolled around, she presented it to me. I’ve used it for every game I’ve played since. I hope it doesn’t break, as I don’t think they make these anymore. I can’t find a link to it on the Logitech site nor on local PC stores. The only reference I find is on Amazon for way too high a price. After a little fiddling with the key mappings, either to make a special profile for the game or to remap the keyboard config, I play far more smoothly with this device than with any keyboard or gamepad. Best gift ever!

The last thing on my desk is the Roost laptop stand. It’s a bit pricey, but it’s the best laptop stand I’ve ever had. I gave my wife one a few years ago and it’s very easy to set up. It gives much better control than the stands I’ve previously used in corporate settings and it keeps a much better grip on my laptop.

Hopefully this series helps somebody make a decision while they are looking for a new keyboard or mouse, (standing) desk or ergonomic chair.

My home office – monitor arm

Third blog in a row about my office, this one is about the brand new monitor arm I got. This one took me by surprise as I did not think it would make such an impact on me.

Disclaimer: I’ve bought all my gear myself with my own hard-earned money. So no sponsorship here! Which is why I can give you the good and the bad.

The last package to arrive was the monitor arm. I took a single one, as I only have one screen; my laptop screen functions as my secondary screen. I have used 2 dedicated screens in the past and that is something I want to go back to. Multiple big screens (27″ and up) are a lot more practical than my small laptop screen.

The monitor arm can be attached with a clamp or bolted through the table. The foot of the support column can be changed so the stand can be attached how you like. The part on the underside is a metal clamp and I could easily see how that would damage my brand new table. So I placed the second foot, which has a rubber sole, between the metal clamp and the underside of the table.

Underneath the table

With the column installed, I can attach the arm at the right height. This part feels a little unstable as I need to clamp it on the column. This did have me worried as I installed the arm, but it has not moved since I set it up.

The clamp on the column

Finding the right height can be a challenge though, as the arm can’t easily be moved up or down once it’s been installed. If I loosen it, I’ll need to adjust the height with the screen attached to it, which I assume is tricky and risks the screen falling. That’s why I put it at the same height as my screen was when it was standing on its base. That way, I knew it was at a comfortable height.

When I prepared the screen to be attached to the arm, I was slightly worried that the arm would not be able to carry the weight of the screen. This fear proved to be unfounded as the Asus ROG PG279Q is really light and most of the weight is in the foot. The arm can easily hold the screen up and after I tightened the bolt that controls the tilt, it hasn’t budged from the angle I placed it in.

The Asus ROG is a great screen: it has nice colours and a good refresh rate, but I do wish I had taken the 4K version instead of the 2K version. 4K just looks a lot smoother and is easier on the eyes. Especially when looking at code, emails and Stack Overflow all day. Maybe also a bit for gaming. But mostly for the code.

Attaching the screen to the arm did present a problem. The mechanism would be super easy to use if the part that holds up the screen stuck out of the back of the screen. If you take a good look, the attachment is sunken into the case of the screen. This part slides over the end of the arm, so it’s really easy to install… normally. With the cool triangular design (that you never see), the sliding mechanism is blocked by the triangular part of the back. I attached the sliding mechanism to the arm and asked my wife to hold up the screen while I screwed it to the attachment. Luckily, the screen itself is very light, but it was a tense moment anyway.

Screen attached to the arm

The arm can’t easily be adjusted in height. Tilting and turning the screen from side to side is very easy though, and I notice I use it to show my wife something when she’s standing next to me, so she can more easily see what’s on the screen. The screen has a nice viewing angle, but looking at a screen straight on instead of at an angle is always more pleasant.

Cable management is pretty easy, although the plastic holders are a bit of a squeeze for my HDMI cable. I made sure there is a bit of room at the end so I can turn my screen left and right without pulling on the cables. There is one cable that is not in the cable management, but that’s because my DisplayPort cable is not long enough to fit into the cable management holders.

There is one detail that annoys me. When I put the table in the standing position, I notice the screen wobbles if I type or touch the table (put a glass down, for example). When I type more slowly, it doesn’t happen; when I type harder or faster, it wobbles more noticeably. It’s subtle, but I notice it when I write while standing up. I know it’s the vibrations through the table and I can’t do anything about it. Unfortunately, that doesn’t make it any less annoying.

Now that my screen floats above my desk, I noticed that I have a lot more table space. The area that the foot of the screen took up is quite large and now it’s available for documents, my phone and keys. Maybe a microphone, if I want to upgrade my audio setup. It’s a decent arm and I love the additional space on my desk, but next time I would look for an arm with a sturdier base so it won’t wobble.

My home office – the chair

Last week, I wrote about the sturdy SmartDesk 2. This week, I review the comfortable new chair I’m sitting on while typing this. Oh, spoiler alert: the chair is living up to expectations.

Disclaimer: I’ve bought all my gear myself with my own hard-earned money. So no sponsorship here! Which is why I can give you the good and the bad.

The ErgoChair 2

The two ErgoChair 2 chairs were next to arrive, about one and a half months after ordering them. Before, I had a Markus chair from Ikea (not the exact model, as I bought mine about 8 years ago). It was a very comfortable chair, but I have to say that the ErgoChair 2 is an upgrade all around.

It started when I assembled the ErgoChair. The instructions are very clear and a lot of thought has gone into the assembly process. In a little over half an hour, I was done with one chair. The star of the show is the superb tool that is supplied. It makes tightening the bolts a breeze. No need to awkwardly grip the little metal tool that is normally supplied, with this tool the bolts are tightened in a flash.

The supplied tool

During assembly, I did make a silly mistake: I put the armrests on backwards on the first chair. Luckily, I saw my mistake as I put the cushion (with the armrests) down. Besides my little derp moment, assembly went as smoothly as it could have.

Now that I’ve used the chair for the past 2 months, I can say it’s very pleasant to sit on. Almost everything can be adjusted. From the height of the chair, the headrest and the incline of the back, to the tilt of the cushion and back tilt tension. I’m not sure what that last one does, but it’s impressive. They even have a very good instruction video on what you can adjust and how to do it, because I didn’t even mention all the settings you can tinker with.

There is one minor point: the armrests. Like the rest of the chair, they’re very customisable. I can adjust the height and move the armrest itself horizontally in all directions. Here is my biggest annoyance so far: the armrests just slide around. Most adjustments, such as moving the armrest or the headrest up and down, happen in stages. I can feel the clicks and stands as I move them. Not so when horizontally positioning the armrests. This is most distracting when I move my arm from my keyboard to my mouse or the other way. Then the armrest can change position without me intending it to.

The top of the armrest is made of a soft kind of plastic which is nice to the touch, but some fabric or fake leather with a cushion would have been more comfortable, especially when sitting in the chair for hours during a workday. I think it’s strange they did not use the same material as the cushion to make the armrest more comfortable.

Don’t get me wrong, the armrest is still comfortable and I love the chair. There’s a lot of thought put into this to make it as comfortable as possible. If the good folks at Autonomous add some cushion to the armrest, they’ll have the perfect chair.

Up next week: the monitor arm, again from the good folks over at Autonomous.

My home office – the desk

Since I moved into a new house a few months ago, I’ve upgraded my office quite a bit. Writing about your office setup has been a big hit ever since Covid-19, so I decided to add mine as well. I know I’m a bit late to the party, but then again, I only recently got all the parts in. So let’s start with the standing desk.

Disclaimer: I’ve bought all my gear myself with my own hard-earned money. So no sponsorship here! Which is why I can give you the good and the bad. I’m going to start with the new desk setup: a standing desk, ergonomic chair and monitor stand, all ordered from Autonomous. I ordered these parts on the 11th of June 2020.

My standing desk

The first item to arrive was the SmartDesk 2 Premium. Actually, that’s not true: the cable trays arrived after 2 weeks or so, but without the table, they are pretty much useless. Combined with the table though, they are really convenient. The tables arrived about a month after I ordered them.

A SmartDesk arrives in two packages. One is the table top; the other contains the legs, the electronic buttons and all the screws. Putting it all together is very easy. The instructions are clear and easy to follow. Except for a Phillips head screwdriver, all tools are supplied.

Assembly is almost a one-man job. It starts with attaching the mechanical legs to an iron frame. Then comes the slightly tricky part of attaching the frame to the table top. It’s a bit of trial and error to get it aligned with the pre-drilled holes, but it’s not that hard. Then comes the part where I needed help: flipping the table. Because the legs are mechanical, they weigh quite a bit and the large surface of the table makes it unwieldy to grab. I think I could manage to do this on my own, but it was so much easier when my wife gave me a hand.

The cable trays can very easily be attached when the table is still upside down. I’m very happy that I ordered them as they are a great place to store excess cables, power strips and laptop chargers that don’t need to move every week.

For even more cable management, there are some zip ties with a sticky edge so I can attach the cables from the buttons to the underside of the table without them hanging in the way or taking up space in the cable trays. The only downside is that after about a month, the glue on my wife’s desk gave out. It’s easily fixed with some super glue, but it’s annoying to do after the table is in the upright position.

Left: the loose cables – Right: the fixed cables

Attaching the power cable for the legs and the cables for the buttons is just plug and play. They look like PC power cable plugs, so they only attach one way. The console has up and down buttons, 4 numeric buttons (1 through 4) and an M button. After plugging in the table for the first time, I have to press the up and down buttons at the same time to reset the height. If there is ever a problem, this is how the table resets. Setting the table at the right height is very easy: I just press the up or down button until it is at the right height.

Programming the height is a bit strange and I did have to look it up. I first tried pressing the number for a few seconds, but that did nothing. The correct way is to press the M button for several seconds until the height on the display starts blinking, then press the number you want to program. So I set the table at sitting height, pressed the M button until the display started blinking and then pressed 1. Now button 1 is set to my sitting height. I did the same, but for standing height, with button 2. Now that these 2 heights have been programmed, it’s a breeze to operate.

The materials of the table are sturdy and have a high-quality feel to them. I think this table will serve me well in the years to come. There are a lot of small, really nice touches. The two that stand out for me are the two holes drilled into the table for cable management and the rounded edges that are really comfortable when I rest my arms.

Next week I’ll write about my thoughts on the ergonomic chair that I ordered.

Splitting IMediator interface

For a past project, I used the awesome MediatR package to send messages throughout the system. Because I used the pipeline for some compute heavy checks, it was not wise to send a request from a request handler. That is why I split the functionality for sending requests and notifications or events.

When a message entered the system, mostly via a REST endpoint, it got dispatched through MediatR to the corresponding handler. The message travelled through the pipeline, where some logging was done, some validity checks were performed and sometimes even some security checks (e.g. “can this user access this data”). In this application, the pipeline is not the most lightweight part.
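
To give an idea of what such a pipeline step looks like, here is a minimal sketch of a MediatR pipeline behaviour that logs around a handler. The behaviour and its logging are mine, purely for illustration, and the exact Handle signature depends on the MediatR version you use (this is roughly the MediatR 8 shape).

public class LoggingBehaviour<TRequest, TResponse> : IPipelineBehavior<TRequest, TResponse>
{
  public async Task<TResponse> Handle(TRequest request, CancellationToken cancellationToken, RequestHandlerDelegate<TResponse> next)
  {
    // runs before the actual request handler
    Console.WriteLine($"Handling {typeof(TRequest).Name}");
    var response = await next();
    // runs after the handler (and any behaviours registered after this one)
    Console.WriteLine($"Handled {typeof(TRequest).Name}");
    return response;
  }
}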

For some actions, I wanted to reuse logic from other handlers. Each handler has a single responsibility and I’d be reusing code: a good decision, in my opinion. Unfortunately, this triggered the pipeline each time, which was not necessary at that point. I quickly saw that this significantly slowed the application. That is why the team and I decided to never send requests from within request handlers.

All said and done, we refactored this pattern of sending requests within handlers to simple service calls. That way, the reused request handler was nothing more than a facade in front of the service being called.
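
To make that concrete, such a facade handler could look something like the sketch below. All the type names (CustomerDto, GetCustomerRequest, ICustomerService) are hypothetical; the point is that the handler only forwards to the service, so other handlers can call the service directly without triggering the pipeline.

// hypothetical types, purely to illustrate the shape of the facade
public class CustomerDto
{
  public int Id { get; set; }
  public string Name { get; set; }
}

public class GetCustomerRequest : IRequest<CustomerDto>
{
  public int CustomerId { get; set; }
}

public interface ICustomerService
{
  Task<CustomerDto> GetCustomer(int customerId, CancellationToken cancellationToken);
}

public class GetCustomerHandler : IRequestHandler<GetCustomerRequest, CustomerDto>
{
  private readonly ICustomerService _customerService;

  public GetCustomerHandler(ICustomerService customerService) => _customerService = customerService;

  // the handler only translates the request; the real work lives in the service
  public Task<CustomerDto> Handle(GetCustomerRequest request, CancellationToken cancellationToken)
    => _customerService.GetCustomer(request.CustomerId, cancellationToken);
}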

Notifications were also being used throughout the system to notify other parts when certain events happened. This would mean that the IMediator interface was passed into a significant number of handlers so they could publish these notifications (or events if you like that term better).

This also meant that the team had easy access to the send request functionality. Now, being the diligent programmers that we all are, I (and other team members, especially the newer ones) never succumbed to the temptation of cutting corners. So we always refactored the second handler into a service and called the functionality via the service. Or maybe not always…

Because that send request is just so easy to (mis)use, it still happened more than I would’ve liked. We all knew that not refactoring would come back to bite us later. From time to time, for whatever reason (pressure, tiredness, deadlines, a new team member, …), it happened again.

That’s when I created a specific interface for sending events through the system. I created an implementation that used the MediatR library. This allowed us to use the MediatR publishing mechanism, without exposing the send request functionality.

public interface IPublisher
{
  Task Publish<TNotification>(TNotification notification, CancellationToken cancellationToken = default)
    where TNotification : INotification;
}

public class MediatrPublisher : IPublisher
{
  // thin wrapper that exposes only the publish side of MediatR
  private readonly IMediator _mediator;

  public MediatrPublisher(IMediator mediator) => _mediator = mediator;

  public Task Publish<TNotification>(TNotification notification, CancellationToken token = default)
    where TNotification : INotification
    => _mediator.Publish(notification, token);
}
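
For completeness, this is roughly how I’d wire it up, assuming an ASP.NET Core Startup with Microsoft.Extensions.DependencyInjection and the MediatR.Extensions.Microsoft.DependencyInjection package:

public void ConfigureServices(IServiceCollection services)
{
  // registers IMediator and all handlers found in this assembly
  services.AddMediatR(typeof(Startup).Assembly);
  // handlers that only need to publish can now depend on the narrow IPublisher
  services.AddTransient<IPublisher, MediatrPublisher>();
}

A notification handler that injects IPublisher simply has no Send method available, which is exactly the point.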

Because I really like what Jimmy Bogard (@jbogard, the creator of the MediatR package) does, I’ve recently submitted a PR to get this into the MediatR package. We all know it’s much better to rely on somebody else’s interface than to create our own (who noticed the sarcasm dripping from this sentence?).

In all seriousness, I think it will benefit the MediatR package to separate these concerns. That is why I’ve created two new interfaces: IPublisher and ISender. These contain the Send and Publish methods that resided in the IMediator interface. Because not everybody wants to switch to these specialised interfaces, I left the IMediator interface in place and have that inherit from the new ones.

public interface IPublisher
{
  Task Publish(object notification, CancellationToken cancellationToken = default);
  Task Publish<TNotification>(TNotification notification, CancellationToken cancellationToken = default)
    where TNotification : INotification;
}

public interface ISender
{
  Task<TResponse> Send<TResponse>(IRequest<TResponse> request, CancellationToken cancellationToken = default);
  Task<object?> Send(object request, CancellationToken cancellationToken = default);
}

public interface IMediator : ISender, IPublisher { }

I’m a big fan of Jimmy’s work and I hope that with this change, I’ve helped improve the quality of life for a number of programmers, including mine. I’m not sure when this will be available in the MediatR package, but I hope soon.

Consuming a C++ DLL in dotnet core

For a client, I need to open a drawing in a specialised program. When the user closes the program, it should upload the drawing to a server. What looks like a straightforward start of a process turned into a mind-boggling number of failures. Journey with me through all the steps I took to get this seemingly simple task to work.

After a quick Process.Start("program", "path/to/drawing") didn’t work, I knew something was wrong. So I contacted the developers of the software and asked for clarification. They sent over very helpful documentation and I was on my merry way again.

Apparently, they have a .dll that is included with their program and that is how an external application can communicate with it. Phew, not so hard after all. So I modelled the external calls to methods and decorated them with the [DllImport] attribute.

private delegate int Callback(int code, string message);
[DllImport("path/to/dll", CharSet = CharSet.Unicode, SetLastError = true, CallingConvention = CallingConvention.Cdecl)]
private static extern int RegisterCallback(Callback method);

[DllImport(/*see previous*/)]
private static extern int StartApplication();

[DllImport(/*see previous*/)]
private static extern int DoWork(string command);

In this flow, which is apparently more common in C++ than dotnet, I first register a callback method, start the application and then send work to it. The registered callback method handles status updates and other information that gets sent back. This way, a single marshalling needs to be done and multiple DoWork methods can be called, even different external functions, and they’ll all report back to the registered callback method.

The code parameter in the callback method determines what will be in the message and how to interpret it. For example, the code 0 can refer to application initialisation and the message then is “SUCCESS” or “FAILURE: with a specific reason”. More complex information can be sent back as well. For example, code 2 could be that the application has saved the drawing, the message then contains XML with the location of the drawing and some meta information about the changes that were made.
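
A callback that dispatches on those codes could look roughly like the sketch below. The code values and the XML layout are just the examples from the paragraph above, not the vendor’s actual contract.

private static void HandleCallback(int code, string message)
{
  switch (code)
  {
    case 0: // application initialisation: message is "SUCCESS" or "FAILURE: ..."
      var initialised = message.StartsWith("SUCCESS");
      // flag the rest of the application that start-up succeeded or failed
      break;
    case 2: // drawing saved: message contains XML with the location and meta information
      var savedInfo = System.Xml.Linq.XDocument.Parse(message);
      // read the drawing location from the XML and upload the file
      break;
    default:
      // log unknown codes so new message types are easy to spot
      break;
  }
}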

This is quite an interesting approach, as most examples of interop use a specific callback in each function. It does pose a few interesting challenges though. I found that the callback was never invoked. To get more error information, I had to enable SetLastError = true. That allows me to call Marshal.GetLastWin32Error(), which returns a number that I can then look up in the official Microsoft docs.
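
The check itself looks roughly like this. I’m assuming here that the vendor’s functions return 0 on success, since I don’t remember their exact contract, and OnCallback is a placeholder for a method matching the Callback delegate.

var status = RegisterCallback(OnCallback);
if (status != 0) // assumption: this API returns 0 on success
{
  // only meaningful because SetLastError = true is set on the DllImport
  var win32Error = Marshal.GetLastWin32Error();
  Console.WriteLine($"RegisterCallback failed, Win32 error code: {win32Error}");
}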

Unfortunately, I forgot the error, so I can’t reference it here. It did lead me to an article about setting the CallingConvention to Cdecl. This indicates that I am responsible for cleaning the stack and not the code that I’m calling. It has to do with the defaults in both runtimes: C++ uses the __cdecl convention while the default for C# is __stdcall. The conventions need to align to allow the processes to talk to each other.

Huzzah, done… No wait, too soon, still no callback. The application that I am calling is starting, but it also gets cleaned up too fast. I’m not sure what C++ techniques are used, but either Windows or the GC is cleaning up the application before it can properly start.

What I did notice is that the StartApplication() method returns almost immediately and I have to build a way to wait for the callback that says that the full initialisation of the application is done. I assume that the StartApplication method sets a process in motion and reports that the process kicked off correctly. Dotnet on the other hand figures that the invoked method was done and doesn’t need all those resources anymore, including the application that is halfway through its startup process.

If anybody is interested in the wait mechanic that I tried here: I used a TaskCompletionSource<bool> to return whether the initialisation had completed successfully, based on the message the callback would receive. This way, I could use await to wait for the external program to finish.
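
A minimal sketch of that mechanic, assuming the initialisation status comes back on code 0 with a SUCCESS or FAILURE message:

private readonly TaskCompletionSource<bool> _started = new TaskCompletionSource<bool>();

private void OnCallback(int code, string message)
{
  // assumption: code 0 reports initialisation and the message starts with SUCCESS or FAILURE
  if (code == 0)
    _started.TrySetResult(message.StartsWith("SUCCESS"));
}

public async Task<bool> WaitForStartAsync()
{
  var startResult = StartApplication(); // returns almost immediately
  // check startResult here, then await the callback that confirms the real start-up
  return await _started.Task;
}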

Here’s where the developers of that program gave me a helping hand: they told me dotnet needs to keep a reference to the method that is being called and the way to do that is to use functions from the Kernel32 library to load the functions into memory and only release them when they are done.

[DllImport("kernel32.dll", SetLastError = true)]
private static extern IntPtr LoadLibrary(string pathToLibrary);

[DllImport("kernel32.dll", SetLastError = true)]
private static extern IntPtr GetProcAddress(IntPtr libraryReference, string procedureName);

[DllImport("kernel32.dll")]
private static extern bool FreeLibrary(IntPtr libraryReference);

So instead of using a DllImport, I have to do this by hand.

The LoadLibrary function is used to load the library into the running application. The IntPtr that is returned is a pointer to a memory address where the library is loaded. The GetProcAddress function takes the pointer returned by LoadLibrary and the name of the exposed method. When the library has fulfilled its purpose, so after the DoWork function is done and the callback method has received the done signal, I need to clean up the references to the library by feeding the pointer to the library to the FreeLibrary function.

It’s comparable to how DllImport works behind the scenes, except that DllImport is applied to a method that can be called directly. GetProcAddress only returns another pointer to where the function I want to call is loaded into memory. To convert that pointer to a callable function, I need to marshal it. In the full dotnet framework, there is a static function Marshal.GetDelegateForFunctionPointer which takes the IntPtr and the Type of a delegate. It then returns an object that can be cast to the delegate. If you are using dotnet core, there is a generic function Marshal.GetDelegateForFunctionPointer<T>.

internal class CustomApplicationIntegration : IDisposable
{
  // kernel32 imports here
  private readonly IntPtr _dllPointer;
  public CustomApplicationIntegration(string pathToLibrary)
  {
    _dllPointer = LoadLibrary(pathToLibrary);
  }

  public void Dispose()
  {
    if (_dllPointer != IntPtr.Zero)
      FreeLibrary(_dllPointer);
  }

  private delegate void Callback(int code, string message);
  private Callback _keepAlive;
  // written from the callback (which arrives on another thread), read in the wait loop below
  private volatile bool _applicationDone = false;
  private delegate int RegisterCallback(Callback method);
  private delegate int StartApplicationDelegate();
  public void Interact()
  {
    _keepAlive = CallbackMethod;
    var registerPointer = GetProcAddress(_dllPointer, "RegisterCallback");
    if (registerPointer == IntPtr.Zero)
      throw new Exception("Something went wrong!");
    // dotnet 4.X aka full framework
    var registerCallback = (RegisterCallback) Marshal.GetDelegateForFunctionPointer(registerPointer, typeof(RegisterCallback));
    var registerResult = registerCallback(_keepAlive);
    // check that registerResult returns ok

    var startPointer = GetProcAddress(_dllPointer, "StartApplication");
    // dotnet core
    var startApplication = Marshal.GetDelegateForFunctionPointer<StartApplicationDelegate>(startPointer);

    var startResult = startApplication();
    // check that startResult returns ok
    while (!_applicationDone)
      Thread.Sleep(100);
  }

  // not static: the callback needs to set _applicationDone on this instance
  private void CallbackMethod(int code, string message)
  {
    // handle callback
    _applicationDone = code == StopCode;
  }
}

If something goes wrong in either LoadLibrary or GetProcAddress, they return the value IntPtr.Zero. That’s how I can check for success: everything that is not the zero value is a valid pointer. Because SetLastError is set on the DllImports of these functions, I can get the id of the error with the Marshal.GetLastWin32Error function, just like I did earlier.

At the end of the Interact method, there is a loop to keep waiting on the external application to send the stop code. Only when that code is received, can I continue with my application. There are functions in the library to do specific tasks and to stop the external application, but I’m omitting them here for brevity.

In the actual app, there are several loops waiting for the external application to finish some work or signal some status changes. For example, I have to wait for the application to start, before I can assign any work to it.

Now let’s run this thing and see… still… no… callbacks… What I can see is that the application is still exiting, but now it takes longer. The startup logs indicate that the library is doing its job correctly, but that the process that is started by the library is being terminated. Windows, stop killing my processes!

The Task.Run is to blame here! I run the Interact function in a Task and it runs a while longer, but it doesn’t protect the process the library is starting. When I changed this to a Thread, it solved the matter. To be honest, I have no idea why the Task terminates early, while the Thread runs as intended. If anybody knows why this is, do get in touch.

// the bad way: the Task does not keep the external process alive
await Task.Run(() =>
{
  var integration = new CustomApplicationIntegration("path/to/dll");
  integration.Interact();
});

// the good way: a dedicated thread keeps running until Interact returns
var workThread = new Thread(() =>
{
  var integration = new CustomApplicationIntegration("path/to/dll");
  integration.Interact();
});
workThread.Start();

I hope this runs, I hope this runs and… I wait forever on the callback function. Don’t give up hope, this is the last problem to solve, but what a problem it is. The issue is that I created a deadlock: the external application invokes the callback and waits for it to return (eventually it would crash after a looong time) and my code waits for the callback but is too busy waiting to notice the callback.

Because it’s a console application, there is no main event loop. The console expects there to be only one way through the application. It can wait on async methods, but it does not expect any application events. What this means is that it does not see the event of the callback, because it does not originate from within the application.

Simple callbacks that happen during the execution of the called C++ code get invoked just fine. Because I registered the callback and then went on to other functions that do not reference that callback explicitly, the console is not ready for these invocations. Instead, the Windows event loop is used and, since the console app does not have a main event loop, they simply do not get processed. This is the same reason why Timers and Sockets do not work as expected in console applications.

The most simple solution to this problem is to convert the console app to a WinForms app and call Application.DoEvents() in the while loop in the CustomApplicationIntegration class.

while (!_applicationDone)
{
  Thread.Sleep(100);
  Application.DoEvents(); // let the message pump process pending Windows events, so the callback can come through
}

To me, it feels a bit hacky. That is because in a normal WinForms app, the Application.DoEvents() method gets handled by the framework. It has a number of guards to handle concurrency when multiple events fire in quick succession. The official Microsoft documentation seems to discourage calling Application.DoEvents manually. Since I’m only expecting events from one external source, there should be no problems. Should…

Wow, that was a lot to take in. From DllImports, over marshalling functions and waiting for an asynchronous response, to finally transforming the console app into a WinForms app so Windows events could be handled. Somebody has earned their favourite beverage, so go on and treat yourself to a nice beer, wine or two fingers of whiskey. I know I’m going to enjoy one myself now.

Specflow GenerateFeatureFileCodeBehindTask error

In the past few months, I have been contacted by a number of readers who notified me that the code for my A simple SpecFlow 3 setup in Rider blog post was broken. It took me a while, but I’ve finally taken the time to fix it.

Some time ago, I had already looked at it and saw that the upgrade to dotnet core 3 introduced a breaking change in the SpecFlow .feature.cs file generation. Due to a change in the MSBuild pipeline, the files were not generated correctly. The following error was shown during the build of the solution.

The “GenerateFeatureFileCodeBehindTask” task failed unexpectedly.

MSBuild

Fortunately, upgrading to the latest version of all the packages fixed all the errors. Thank you SpecFlow team!

The SpecFlowSetup project on GitHub has been updated and should work again.

Set up MTA-STS on a GSuite hosted GitHub pages

To further protect my email communication, I have enabled MTA-STS on my GSuite domain. My site is hosted on GitHub pages, so I’ll walk you through my setup.

It starts with creating a new GitHub repository that will hold the files for the MTA-STS subdomain. For some reason, the config for the MTA-STS is read from an mta-sts.txt file, located in the .well-known folder, but it has to be loaded from the mta-sts subdomain. Why it can’t be done from the main domain is beyond me, but here we are.

Now that I have a repository, I create the .well-known folder and I place the mta-sts.txt file inside that folder. The content of the file can be found in my GSuite Admin section. It is the middle value: MTA-STS Policy Diagnostic. I’ll come back to the other values shortly.
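
For reference, such a policy file has the following general shape. This is the RFC 8461 format with placeholder MX hosts and mode; use the exact value from the MTA-STS Policy Diagnostic rather than copying this.

version: STSv1
mode: enforce
mx: mx1.example.com
mx: *.example.net
max_age: 604800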

Unfortunately, this is where I bumped into a problem with hosting on GitHub pages. By default, it does not expose folders starting with a . (dot). Probably because the servers are Linux based and any Linux folder starting with a dot is automatically a hidden folder. So, Stack Overflow to the rescue!

The fix is as easy as adding a _config.yml file to the base repository with the single line:

include: [".well-known"]

Important detail: do not end with an empty line! Just add that single line to the file to expose the .well-known folder.

The last step in GitHub is to set up the custom domain for this repository. It’s pretty easy to set up a GitHub pages domain, just be sure to include the subdomain before your domain.

Don’t worry if GitHub displays an error, I have not set up the subdomain DNS yet, so it can’t find the setup for the domain just yet.

I’ll fix that right now. I let Cloudflare handle my DNS settings. In the DNS settings of the dashboard, I add 4 A records with the name of mta-sts, one for each IP-address that GitHub pages can handle. For more information about the specific setup of GitHub pages, I refer to their good documentation. Now that the IP redirects are set up, the subdomain should be ready and available.

Two more steps and I’m done. Luckily for me, they are both in my DNS setup. I add a TXT record with the name _mta-sts and the value found in my GSuite Dashboard after “MTA-STS TXT Record Diagnostic”. I add another TXT record with the name _smtp._tls and the value found in my GSuite Dashboard after “Reporting Policy Diagnostic”.
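
For reference, those two values generally look something like this (the id and the email address are placeholders; the real values come from the GSuite Dashboard):

_mta-sts    TXT  "v=STSv1; id=20200630T000000"
_smtp._tls  TXT  "v=TLSRPTv1; rua=mailto:tls-reports@example.com"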

Do not forget to change the rua=mailto: value of the “Reporting Policy Diagnostic” text to an email address which you can receive. That is where reports will be sent to. In the near future, Report URI should get support to process the values.

Now I enjoy more secure email communication. If anybody wants to learn more about SMTP MTA Strict Transport Security, I recommend reading Scott Helme’s very good blog post or Report URI’s expanded blog post. That’s where I learned about it.

Edit: Thanks Faisal from emailsecuritygeek.com for pointing out a typo. Cheers mate!

Scammers used my email as a spam address

On the 7th of November 2019, I received an email from AliExpress telling me that I had created an account with them. Seeing as I didn’t do this, at first I thought it was a scam. My email address contains a dot between my first and last name and that was missing. So I did what I do with all spam: I ignored it.

A few weeks later, on November 25th, I received a notification that I had a shopping cart with items in it. I decided to go to the AliExpress website and do a password reset on “my” account. Surprisingly, the original email had not been spam after all, and a few moments later, I was the proud owner of an AliExpress account.

The first thing I did was check out my shopping cart. I did not take a precise inventory at the time, I just deleted the few items that were in it. It did prompt me to look into my already purchased items. There was a range of strange choices from plastic apples for table decoration to knockoff Disney dolls. The one thing they all had in common was that they cost under 20 euros, thus skipping most customs controls. So the buyers evade sales tax, limit checks on the knockoff goods and get a higher chance the goods will get delivered.

When I looked at the account details, I saw a fake name with Bonny as the first name and a bogus shipping address in France. It was entered half a dozen times, so I concluded I was dealing with a master criminal that knew how to efficiently navigate the site.

I looked the address up on Google Maps and it turned out to be a corn field. I’ve always wondered how they deliver to such places. The delivery guy shows up in a truck with the stuff in the back and then what? Is there a shady guy with a nondescript white van ready to take the goods? I guess I’ll never know.

Back to the order history. All in all, there were 28 items bought on “my” account. When I saw that, I blamed AliExpress for not verifying the account before accepting orders. I received a welcome mail, but I never had to verify that my account is controlled by me. So there are probably countless unverified accounts that are used by scammers to buy counterfeit goods. That means that AliExpress is profiting from what are, in my opinion, fraudsters.

Until I checked the orders more closely. Apparently, 20 out of the 28 orders haven’t been paid yet. That means that over 70% of the orders haven’t been paid 18 days after they were shipped. Somehow, I doubt that they will ever be paid, even if I had not taken back the account. Which means that both AliExpress and the third party sellers are missing out on revenue.

All this scammer needs to do is create another fake account and buy as many goods as they can before the account is suspended. They can keep doing this as long as accounts are not verified, as there is a treasure trove of email addresses out there for anybody who knows where to look. And it’s not exactly hard to find them even if you don’t know where to look.

So I don’t know why AliExpress is not verifying accounts. It’s costing them money. It’s costing their subcontractors money. It’s costing European countries taxes. They are basically enabling scammers. The only thing they’d need to do to stop these thieves, is verify an account before that account can be used to buy goods.

At no point was my email compromised. They just used my email address to sign up. Thanks to a combination of a password manager (shameless plug for 1Password) and a strong second factor (shameless plug for YubiKey security keys), scammers will be hard pressed to get into my most valuable accounts. For full transparency, I’m not sponsored by either vendor, I bought these products myself. I’m a big fan of them.

And as a last item, just to be thorough: I did not report this to the police. I do not feel that the information I have to share will make a compelling case against anybody. So instead of adding more white noise to the pile of noise the police already has to deal with, I’m going to ignore this.

What I do want to shine a light on is that we cannot let scammers just use our emails for their fake accounts. So any email telling me I created an account somewhere, especially at online shops, will get a closer look to see if it’s an actual welcome mail or a scam in itself.