I have some low-cost Chinese arbitrary waveform generators for analog design/test work below ~25MHz. They are not lab grade, but they are inexpensive and offer a rich feature set if you can live with their warts. If you’re not familiar with waveform generators, this is a good introduction.
This post examines an upgrade to the Siglent SDG1025 (see: datasheet, user manual, service manual). One of the main shortcomings of this generator is that it uses a basic crystal oscillator as its frequency standard, which delivers around 10ppm of frequency accuracy at room temperature and might be as bad as 50ppm. While that’s fine for many applications, Siglent includes a spot on the main board for a TCXO, so you can upgrade this to 0.1ppm accuracy for under $20. You need to remove the crystal, add the TCXO, and add one jumper wire; see how-to links: here and here.
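To put those numbers in perspective: at 10MHz, a 50ppm error is 500Hz, 10ppm is 100Hz, and 0.1ppm is just 1Hz.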
After installing the TCXO, I tested the SDG1025 accuracy using its frequency counter feature to measure a signal from a Marconi 2025 RF signal generator which was slaved to a rubidium frequency standard (in my Anritsu spectrum analyzer). A few things to note:
The frequency accuracy was improved to 0.1ppm as advertised
The frequency counter input requires a strong signal: with +10dBm input, it was accurate only to 12.5MHz; with +13dBm input (the max my RF sig gen can output), it could count accurately through 27.5MHz. The manufacturer specifies it to 200MHz.
Of the two low-cost waveform generators I own (Rigol DG1022 and Siglent SDG1025), I prefer the Rigol, but both are useful tools, and the TCXO hack certainly improves the SDG1025’s frequency accuracy.
I have been waiting a long time for a usable low-cost spectrum analyzer, and it looks like that wait is over.
Update Jan 2023: I sold the original TinySA and upgraded to a TinySA Ultra. It is more expensive (about $140), but much more capable. The Ultra provides an extended frequency range up to 6GHz (level-calibrated to 5.3GHz) and adds several nice features, including a remarkably functional bonus signal generator (CW, AM, FM) and a 4″ LCD screen (welcome eye relief for folks with high-mileage eyes). This is not a toy; it has its limitations, but the specs now make it quite usable (phase noise of -92dBc/Hz, 2dB absolute amplitude accuracy, 1ppm frequency standard, 200Hz minimum RBW), so you can observe signals with reasonable accuracy and good resolution.
As before, I bought from R&L Electronics (the authorized US outlet – be careful to avoid poor-performing clones). I’ll do some testing with it soon, but you can see a few quick tests below.
The front-end is better but still easily over-driven (and damaged above +10dBm) so I usually use it with a 20dB fixed attenuator in line. As an example of how it compares to higher-end gear, here’s a 915MHz signal FM modulated at 5kHz shown (frozen) on the TinySA Ultra and on a Siglent SVA1015X. Peak measurements are within 1.5dB and 100Hz. Not bad!
I was recently looking for RF interference in the WiFi band and ran the TinySA Ultra and a (much more expensive) Anritsu BTS Master for a while, both in max-hold mode, to get a feel for what was out there, and I was pleased with the results. The pictures won’t match precisely due to the nature of sweep times and max hold, but the results were quite good and immediately showed the strong channel 1 signal with fairly good accuracy on both frequency and amplitude. I’m quite fond of this little SA.
There is also a TinySA app in development; it lets you connect to the TinySA from your Windows PC and capture traces. It’s a little rough at present, and the sweep rate gets very slow when you are capturing to the PC at narrow spans (in part because the default is 1k points), but it’s still nice to be able to capture traces directly. Here’s an example of a 433MHz signal FM modulated at 1.5kHz.
OLDER VERSION (note: below is about the older TinySA, not the Ultra)
I bought a TinySA spectrum analyzer for $55 from R&L Electronics (one of the official dealers). If you get one, try to use the official sites; there are lots of bad clones out there. The TinySA is, as the name implies, a tiny, battery-powered, touch-screen, 100kHz – 960MHz spectrum analyzer. I have been very pleasantly surprised by its performance.
It has its limitations, but it is the first usable low-cost SA I’ve found for looking at sub-GHz FM signals. There are plenty of low-cost toys out there (see my earlier post regarding the “Simple Spectrum Analyzer”) that claim to do spectrum analysis, but they have always had fatal flaws (inaccurate or horribly coarse resolution bandwidth). This one has quite good frequency and amplitude accuracy out of the box and a usable 3kHz RBW.
It has some limitations too (but for goodness’ sake, we’re talking about a pocket-sized spectrum analyzer for $55!!!):
960MHz max frequency
3kHz resolution bandwidth (RBW)
Amplitude uncertainty seems to be around 2dB
Easily over-driven (claims +10dBm max input, but works much better below -10dBm)
Below are some pictures showing the SA in action, including both strengths and limitations.
For comparison, here is what the signal generator output looks like on a higher-end piece of test gear (Anritsu MS8609A):
I do a lot of work in the 902-928MHz ISM band which is often impacted by strong nearby cellular signals. I looked at the 50MHz span around 915MHz using both a TinySA and an Anritsu CellMaster. The TinySA did an outstanding job, showing the same results but with much greater dynamic range (the Anritsu is meant as a cell tower service tool and supports looking at much stronger signals). On both analyzers, you can see a strong LTE signal at 892.5MHz. Measurements were using a Linx sleeved dipole centered at 915MHz.
TinySA showing LTE signal at ~892MHz; same LTE signal on my Anritsu MT8212B
I bought an over-the-range GE microwave oven (model JVM3160) less than 4 years ago. It was a bear to install, so I was pretty disappointed this week when it started blowing the circuit breaker each time I tried to use it. I really didn’t want to have to install another oven. Fortunately, it turned out to be a defective door-closure switch and an infuriatingly bad design.
First, the design problem: microwave ovens have small electronic switches to detect whether the door is closed and prevent the oven from turning on if it is not (for safety reasons). Unbelievably, GE decided that blowing the fuse is a good way to prevent the oven from turning on. While that is technically true (it does stop the microwave), it’s a ridiculous way to do it since it gives no indication of what’s actually wrong and could create a much greater hazard if the home’s wiring, fuse, or circuit breaker were inadequate. Combine this with use of cheap switches that fail within a few years and I question whether I should ever buy another GE product.
Thank goodness for the internet, because this turns out to be a common problem and others had figured it out. After disassembling the front panel (something you can do easily without removing the oven from the wall), I found that one of the three door-detect micro-switches was indeed not working. I removed, repaired, and reinstalled the switch, and the microwave works again. I suspect the switch will fail again, but replacements are inexpensive and widely available, so next time I’ll know what to do.
If you have a GE microwave that’s blowing fuses, and are comfortable with basic electronics, this YouTube video explains clearly how to access the microwave’s electronics without removing the oven from the wall, how to test the switches, and how to remove the switch assembly for repair.
Update: the original switch did indeed fail again after a few months, so I replaced it with a new switch purchased on Amazon here, and the replacement has worked like a champ ever since.
Disclaimer: It should go without saying, but microwave ovens use high voltages; you should never open or work on any appliance without unplugging it and knowing what you are doing. Although it is not exposed in this repair, microwave ovens also contain a high voltage capacitor that retains a dangerous charge even when the oven is unplugged. If you don’t know what you’re doing, play it safe and call a repair professional.
Virtual machines (VMs) can be handy for all sorts of reasons; for example, I may need a machine running Windows 7 to support legacy development tools. I can create a VM running Windows 7 on my host machine that’s running Windows 10. Windows 10 comes with Hyper-V (similar to VirtualBox and other virtualization environments) which lets you do this quickly and easily.
However, for embedded development the virtual machine needs to be able to talk to the target, and unfortunately Hyper-V doesn’t directly support mapping virtual COM ports on the VM to physical COM ports on the host machine. Fortunately, Tim Howard wrote a nifty open-source utility, COMpipe, that lets you do this pretty easily.
To use it, open Hyper-V Manager, select your VM, choose Settings, choose COM1, select Named pipe, and give the pipe a name (e.g. wm3com10).
Then launch COMpipe on the host machine in a Command Prompt window (run with administrator privileges), specifying the physical COM port and the named pipe. For example:
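Assuming the host’s physical port is COM10 and the pipe name from the previous step, the invocation looks something like this (a sketch; check the COMpipe README for the exact option flags and baud-rate setting):

COMpipe -b 115200 -c \\.\COM10 -p \\.\pipe\wm3com10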
At that point, you should be able to open COM1 in your VM using your favorite serial terminal software (TeraTerm, PuTTY, whatever) and communicate with the device that’s actually connected to COM10 on your host machine. Thanks Tim!!!
When I develop an embedded system, one of the first things I implement is a command-line interface (CLI), which is invaluable for unit testing. It doesn’t take a lot of code to make a fairly sophisticated interface that allows you to type commands and get responses just like a Unix or Windows/DOS command shell.
When I am developing in EmBitz (my favorite microcontroller IDE), for STM32 microcontrollers (my favorite microcontrollers), I use an STLinkV2 (or clone) to allow me to load software into the target and debug it easily. EmBitz provides a wonderful additional feature called EBMonitor that hooks _write() and allows you to redirect standard input and standard output (e.g. printf) over the STLinkV2 and display it in a console window within the development environment. This means you don’t need a serial connection to your target to access the console. See my previous post for more information on using EBMonitor.
However, you often want the CLI to be available to non-developers (e.g. users) over a serial connection, via a USB-to-TTL dongle or a USB-to-TTL serial converter built into your target such as the CH340G or HT42B534. Creating a serial UART console is easy too; you just need to implement your own _read() and _write() functions, which usually look something like this:
#ifndef USE_EBMONITOR

// Function called by stdio to send output to stdout
int _write(int fd, char *buf, int len)
{
    int i;
    for (i = 0; i < len; i++) {
        uart_putchar(buf[i]);
    }
    return i;
}

// Function called by stdio to read data from stdin
int _read(int fd, char *buf, unsigned buf_size)
{
    int nRead = 0;
    int ch;                     // declared outside the loop so the while() test can see it
    do {
        ch = uart_getchar();
        if ((ch != EOF) && buf) {
            buf[nRead++] = ch;
        }
    } while ((ch != EOF) && ((unsigned)nRead < buf_size));
    return nRead ? nRead : EOF;
}

#endif
Here uart_getchar() and uart_putchar() are functions that read/write a character from/to the UART…trivial if polled, or a little more complicated if you want them interrupt-driven (which you do). Once you’ve written this, you can just #include <stdio.h> in your other modules and use printf() for formatted I/O.
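For reference, minimal polled versions of these two helpers might look like the sketch below; it assumes an STM32F1-class part using USART1 and the CMSIS device header (register names differ on newer families), and it is not the exact code from this project:

#include <stdio.h>          // EOF
#include "stm32f1xx.h"      // CMSIS device header: USART1, USART_SR_RXNE, USART_SR_TXE

// Return one received character, or EOF if nothing is waiting (polled)
int uart_getchar(void)
{
    if (USART1->SR & USART_SR_RXNE)          // receive data register not empty?
        return (int)(USART1->DR & 0xFF);     // reading DR also clears the flag
    return EOF;
}

// Transmit one character, waiting for the transmit data register to be free (polled)
void uart_putchar(char c)
{
    while (!(USART1->SR & USART_SR_TXE))     // wait until transmit data register empty
        ;
    USART1->DR = (uint8_t)c;
}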
Notice the use of #ifndef USE_EBMONITOR to wrap _write() and _read(). I do this so I can use EBMonitor for debug builds and the UART for release builds. EmBitz supports two targets by default: Debug and Release. For the Debug target, I define USE_EBMONITOR under: Project -> Build Options -> Compiler Settings -> #defines
For the Release target, I don’t define USE_EBMONITOR:
Writing interrupt driven UART code is beyond the scope of this post, but there are loads of examples and tutorials online. When implementing a CLI you’ll probably want to do some processing of characters as they are received in the ISR. Typically, you’ll store them in a command buffer and then set a flag (e.g. cmd_ready) when a carriage return is received to indicate that there is a command ready to be processed (don’t process commands in interrupt time; just poll the flag in your main loop and clear it after processing the command).
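If you do go interrupt-driven, the receive side usually amounts to a tiny ring buffer: the ISR only stores bytes, and uart_getchar() drains them from the main loop, keeping all command parsing out of interrupt time as described above. A rough sketch under the same STM32F1/USART1 assumptions as earlier (not this project’s actual ISR):

#define RX_BUF_SIZE 64
static volatile uint8_t rx_buf[RX_BUF_SIZE];
static volatile unsigned rx_head, rx_tail;

// RX interrupt: just buffer the byte; no command processing here
void USART1_IRQHandler(void)
{
    if (USART1->SR & USART_SR_RXNE) {
        uint8_t ch = (uint8_t)(USART1->DR & 0xFF);   // reading DR clears RXNE
        unsigned next = (rx_head + 1) % RX_BUF_SIZE;
        if (next != rx_tail) {                       // drop the byte if the buffer is full
            rx_buf[rx_head] = ch;
            rx_head = next;
        }
    }
}

// Interrupt-driven replacement for the polled uart_getchar() above
int uart_getchar(void)
{
    if (rx_head == rx_tail)
        return EOF;                                  // nothing waiting
    int ch = rx_buf[rx_tail];
    rx_tail = (rx_tail + 1) % RX_BUF_SIZE;
    return ch;
}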
I usually have a command interpreter module that creates a linked-list of commands and their associated functions. The structure of a command looks like this:
/// Commands are stored as linked lists
typedef struct cmd_s {
    char *nameP;            // command name - string to match
    void (*fnP)();          // function to execute if string matched
    struct cmd_s *nextP;    // link to next command in this list
} Command;
The command interpreter code then has only a few EBMonitor-specific portions like those below (and most of those are just for efficiency):
void command_init(void) {
#ifdef USE_EBMONITOR
    // UART1 is normally used for console I/O, but
    // EBLink GDB Server supports console I/O via STLink debug interface
    // so we don't have to use the UART for debugging. printf output
    // is buffered until \r\n or fflush(stdout) and then displayed in EB monitor
    // input is read from stdin (scanf, fgets, etc.)
    void EBmonitor_buffer(FILE*, char*, uint16_t);
    #define EBM_outLength 128       // EB Monitor is used for debugging
    #define EBM_inLength 64
    static char EBM_out[EBM_outLength];
    static char EBM_in[EBM_inLength];

    // Route console I/O over the STLink debug interface
    EBmonitor_buffer(stdout, EBM_out, EBM_outLength);
    EBmonitor_buffer(stdin, EBM_in, EBM_inLength);
#endif

    // Turn off buffers, so I/O occurs immediately
    setvbuf(stdin, NULL, _IONBF, 0);
    setvbuf(stdout, NULL, _IONBF, 0);
    setvbuf(stderr, NULL, _IONBF, 0);
}
The rest of the command interpreter is the same between UART and STLinkV2 interfaces. For example:
/// Top-level command list
Command *commandsP;

/// Flag indicating a command is ready to be processed
unsigned char cmd_ready;

/// Buffer for current command being entered
#define MAX_CMD_LEN 80
static char cmd[MAX_CMD_LEN];

/// Prompt user for a command
void command_prompt(void) {
    printf("ready>");
    fflush(stdout);
}

/// @returns true if a command is ready for processing
int command_ready(void)
{
    return !!cmd_ready;
}

/// Add a command to the head of the commands list
void command_add(Command **listP, Command *cmdP) {
    if (cmdP && listP) {
        cmdP->nextP = *listP;
        *listP = cmdP;
    }
}

/// Display commands available in the specified command list
static void list_commands(Command *listP) {
    printf("Commands: ");
    while (listP) {
        printf("%s ", listP->nameP);
        listP = listP->nextP;
    }
    printf("\r\n");
}

// Call regularly from your main loop
void command_process(void)
{
    static int len;                 // length of current command in buffer
    int ch = getchar();
    if (ch != EOF) {
        // drop received characters while waiting to process last command
        if (cmd_ready) return;
        if ((ch == '\r') || (ch == '\n')) {
            putchar('\r');
            putchar('\n');
            if (len) {
                cmd[len] = 0;       // null terminate current command
                cmd_ready = 1;
                len = 0;
            } else {
                command_prompt();
            }
        } else if (ch == '\b') {
            if (len) {
                len--;
                putchar(ch);
                putchar(' ');
                putchar(ch);
            } else {
                putchar('\a');      // sound beep
            }
        } else if (len + 1 < MAX_CMD_LEN) {
            cmd[len++] = ch;
            putchar(ch);
        } else {
            putchar('\a');          // sound beep
        }
    }
    if (cmd_ready) {
        char *command = strtok(cmd, " \r\n");   // extract first command token
        command_execute(commandsP, command);
        command_prompt();
        cmd_ready = 0;
    }
}

/// Search list of commands for specified command and execute if found
void command_execute(Command *listP, char *command) {
    // search list of commands and execute matching command if found
    Command *cmdP = listP;
    while (command && cmdP) {
        if (strcmp(command, cmdP->nameP) == 0) {
            // command found so execute associated function
            cmdP->fnP();
            return;
        }
        cmdP = cmdP->nextP;
    }
    // command not found, show user the command options
    list_commands(listP);
    cmd_ready = 0;
}
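Putting it together, registering a command and driving the interpreter from the main loop might look like this; the "led" command, its handler, and the board bring-up calls are hypothetical placeholders, not code from this project:

// Hypothetical example command: "led on" / "led off"
static void led_fn(void) {
    char *arg = strtok(NULL, " \r\n");       // next token after the command name
    printf("led -> %s\r\n", arg ? arg : "(no argument)");
}
static Command led_cmd = { "led", led_fn, NULL };

int main(void) {
    // clock_init(); uart_init();            // board bring-up (names are placeholders)
    command_init();
    command_add(&commandsP, &led_cmd);
    command_prompt();
    for (;;) {
        command_process();                   // poll for characters and completed commands
    }
}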
When developing for embedded platforms, one of the first things I usually implement is a command-line interface (CLI), which is invaluable for debugging. Traditionally I use a UART, but when developing for STM32 targets using my favorite IDE (EmBitz), there’s another option that uses only the STLinkV2 debug probe and so doesn’t consume a UART (or require additional connections to the target). This option is EBmonitor, which uses the EmBitz GDB server replacement EBlink and works very well.
The documentation for EBmonitor is a bit thin, so here’s an example of how it is used:
#include <stdio.h>
#include <string.h>   // for strtok()
// EBLink GDB Server supports console I/O via STLink debug interface
// so we don't have to use the UART for debugging. printf output
// is buffered until \r\n or fflush(stdout) and then displayed in EB monitor
// input is read from stdin (scanf, fgets, etc.)
void EBmonitor_buffer(FILE* , char*, uint16_t);
#define EBM_outLength 128 // EB Monitor is used for debugging
#define EBM_inLength 80
char EBM_out[EBM_outLength];
char EBM_in[EBM_inLength];
void cli_init() {
    // Route console I/O over the STLink debug interface
    EBmonitor_buffer(stdout, EBM_out, EBM_outLength);
    EBmonitor_buffer(stdin, EBM_in, EBM_inLength);
}

void sample_fn_using_cli() {
    printf("prompt>");
    fflush(stdout);                          // don't wait for EOL, send prompt now
    char in_buf[80];
    if (fgets(in_buf, sizeof(in_buf), stdin)) {
        // process the user input
        char *token = strtok(in_buf, " \r\n");   // first token; pass NULL on later calls
        // process token(s)
    }
}
You can use all the standard format specifiers for printf (%u, %s, %lu, …), and you can use scanf, fgets, or any of the standard file/stream I/O functions. Pretty sweet.
Your EmBitz project will have to be configured to use EBmonitor including:
Debug->Plugins->EB monitor->Send NEWLINE as EOL on enter
STLinkV2 clone dongles are dirt cheap and beautifully integrated with EmBitz. You just Build and then start debugging. EmBitz uses the STLink to flash the target very fast and then lets you run in a full source-level debugger where you can set breakpoints, stop the processor, step through code, examine variables and memory. Using the same interface for your CLI means you only need the dongle and 4 wires connected to the target (includes power and ground!) to do development.
I am designing an upgraded product around the Espressif ESP32-WROOM module, a successor to the amazing ESP12. The ESP32-WROOM is a ridiculously inexpensive module built around a 32-bit Xtensa ESP32 dual-core CPU (up to 240MHz each) with 520kB of RAM, 4MB of flash, WiFi and Bluetooth transceivers, and gobs of peripherals. They cost under $4 in low quantity which is simply untouchable in the embedded world.
I develop in C++ using VSCode with PlatformIO.
Flashing the ESP32 target: PlatformIO has an infamous problem downloading code to the target. There are dozens of posts suggesting how to tweak the timing of the reset and boot sequence to make it work better, but in the end the problem turns out to be the version of esptoolpy that is specified by default (2.6). By downgrading to 2.1, everything works like a champ. To do this, you just add a platform_packages specification line to your platformio.ini, along the lines of the sketch below.
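A minimal platformio.ini sketch (the environment name, board, and the exact tool-esptoolpy package version string pinned below are assumptions; adjust them for your setup):

[env:esp32dev]
platform = espressif32
board = esp32dev
framework = arduino
; Pin esptool.py back to v2.1 (this package version string is an assumption; verify against the PlatformIO registry)
platform_packages =
    tool-esptoolpy @ ~1.20100.0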
I’ve been working from home since the pandemic started, and it was becoming increasingly clear that my old 3rd-gen Core i5 processor wasn’t cutting it anymore. So many aspects of the machine were dated that upgrading pieces wasn’t really going to help (8-year-old CPU, DDR3 memory, SATA drive, PCIe 2.0, LGA1155). It was particularly problematic when I needed to run VMs.
At Benji’s suggestion, I took a look at Xeon-based workstations, and he was, as usual, correct. Xeons are marketed for high-end applications (servers and workstations) and often cost $1-2K when new (just the CPU). Workstations are feature-rich and easy to maintain because they sell for several thousand dollars to corporate customers. However, those customers need to stay on the leading edge, so they upgrade frequently and those machines hit the used market at much lower prices. I try to stay off the bleeding edge of computing because gear devalues so quickly; 4-year-old high-end gear is often the sweet spot. So the new home PC consists of:
The machine cost under $500 and offers a good step up in performance (Passmark 18503 vs. 4645), placing it on par with an i9-9900K but at a much lower price and with a lot more headroom. This machine is well engineered (no screwdrivers required for anything), has a single processor with 14 cores, and can be upgraded to two processors with up to 22 cores each. Memory can be upgraded to 256GB, the PCIe storage adapter has an open slot, and there are two open SATA bays for bulk storage. The NVMe storage is *way* faster than SATA.
Two issues to be aware of:
1. I was a little worried about fan noise because the system only uses 80mm fans. I’m a fanatic about noise because I work in a very quiet room and any fan noise is noticeable. The system isn’t silent, but it’s not bad either. Although I wish Dell had used 120mm fans, the fan speed is adaptive and in normal use is pretty quiet. The system generally idles with the cores drawing under 15W and temperatures just a few degrees above ambient; as I write this, the cores are at 27C (about 80F). The CPU gets warm when the cores get busy, but it cools down quickly and the fans don’t really kick in unless needed. I replaced the thermal paste with Arctic MX4.
2. I’d planned to move my GTX1060 6GB half-length (single-fan) card from my old machine. Unfortunately, the T7810 case has a rubber bar on the cover that comes down over the PCIe boards, perhaps to help keep them seated during shipping, and that bar lands right on top of the GTX’s 6-pin PCIe power connector, preventing the case from closing fully. I have another 1060 in the office that’s a bit longer and will try that at some point, but the GT1030 is silent (fully passive cooling), fits, and is good enough for now; it is no match for a 1060, but I’m not a gamer and Fusion 360 is really the only program I use that needs a good GPU.
Overall, I’m pleased with the system and hope to be able to keep using it for at least 4 years before upgrading again.
I used an Atmel ATTiny13A in a project; it only has 8 pins and all are in use. There isn’t much memory (program or data), so debugging can be a challenge…but fortunately, there’s Atmel-ICE to the rescue! The Atmel-ICE (see product page) is inexpensive and can be used to program and debug Atmel SAM (ARM) and AVR targets. To get real value out of it, you need to use it with Atmel Studio: their free IDE built on Visual Studio.
Connector weirdness: The ICE connects to your PC with a micro-USB cable and to your target with a ribbon cable (ATATMEL-ICE-CABLE). The cable has both a 10-pin 50-mil JTAG connector and a 6-pin 100-mil SPI ICSP/DW connector configured for the standard AVR ICSP pinout. The red wire on the cable denotes JTAG pin 1; when connecting to an AVR ICSP header, the red wire connects to pin 3 (SCK). Strangely, the pin 1 indicator on the 6-pin plug does not denote pin 1 but actually connects to pin 5 (nReset). Anyway, you can trust that the 6-pin connector is correctly wired for the ICSP header (SPI/DW) even though the markings suggest otherwise. Assuming your ICSP header has pin 1 (MISO) in the top left, then the key bump on the plug and the red wire will be on the left side. For examples, see this picture showing the connection to an Uno or this closeup showing the connection to a Mega.
Unlike most ICSP programmers, the ICE does not supply power to the target; the power pin (2) is just used to detect the target voltage. The green LED near the edge of the ICE enclosure indicates target power. This is unfortunate because it makes the ICE less useful as a standalone programmer, but it allows it to work with both 3.3V and 5V targets. (I would have preferred a target-voltage switch and a 3.3V regulator.)
You can use the ICE in two connection modes: ICSP (SPI) for flashing the target only or serial debugWIRE (which is where the really neat stuff happens). DebugWIRE is Atmel’s one-wire in-circuit debugging solution using only the reset pin! In this mode, you can connect to your target, set breakpoints, examine variables (sort of), step through code, and do real source-level debugging while still using all of the pins on your target. Kudos to Atmel…this is really slick…and it works! (Note: you have to set the debugWIRE enable fuse on the target – Atmel Studio will prompt you to do this). Don’t forget to clear that fuse when you’re done debugging!
Atmel Studio works well: you add your files in the Solution Explorer, build, and start debugging! The only really annoying thing is that the debugger doesn’t let you examine variables that have been optimized away by the compiler (and you usually need to build with optimization enabled because there is so little code space)…so if you want to examine variables, you need to make them global.
In my last Home Automation post, I mentioned that I’d changed from a Staples/DLink hub to a Wink hub after Staples/DLink abruptly terminated support for their products. Well, Wink also abruptly changed its business model and, with very short notice, started charging customers a stiff monthly fee without which the product became unusable. Bait-and-switch is unethical and unacceptable, so I threw the Wink hub in the electronics recycling bin and will never buy from them again…I don’t patronize crooks.
My new hub is a Samsung 3rd gen SmartThings Hub ($67 from amazon) and it looks like a winner. It supports Ethernet for secure, reliable connectivity and serves as a hub for Zigbee and Z-Wave devices. Unlike the Wink hub, I’ve been able to pair *all* of my devices reliably (I could never get some of the devices to pair with the Wink hub). Also unlike the Wink hub, it stays connected reliably, and with the SmartThings hub there is no monthly fee. The phone app is slick and works smoothly and so far I am impressed.
Pairing devices that had been previously paired with my Wink hub required first excluding them using the phone app (Hub-> …Menu -> Z-Wave utilities -> Z-Wave exclusion) and then adding them.
My next sensors will definitely be SmartThings; kudos to Samsung!