Major differences between the DS3231 and DS3231M RTC chips

As should be clear from one of my earlier posts, I’m really interested in clocks and precision timekeeping. In particular, I rather like the Dallas Semiconductor DS3231 series of temperature compensated RTC/TCXO (real-time clock/temperature compensated crystal oscillator) modules.

Recently, I ordered several DS3231 boards from my regular eBay vendor in Shenzhen for some testing, only to find two oddities. First, the factory had evidently fitted one board with an incorrect chip in the same 0.300″ SOIC package as the DS3231: the wholly incompatible DS1315. It happens, particularly at this price point and via gray-market suppliers. No worries; I contacted the seller and they sent me a replacement board.

An eBay-sourced board with a genuine Maxim DS3231M chip and an Atmel 24C32N EEPROM.
An eBay-sourced board with a genuine Maxim DS3231M chip and an Atmel 24C32N EEPROM. I lifted a pad while desoldering the chip that was previously on the board, so I had to add the bodge wire to properly connect the SDA pin. The marking in the top-left of the board is “SBX”, but an awkwardly-placed via makes the board seem a bit more risqué.

The second oddity was that two of the boards I ordered contained a DS3231M chip which, though seemingly just another variant of the same chip, is a somewhat different beast than the DS3231 (non-M). The non-M variant has a standard 32.768 kHz crystal oscillator built into the chip package, with its frequency corrected for temperature variations by the chip’s internal temperature sensor and control logic. The M variant instead uses a microelectromechanical system (MEMS) oscillator, which is more resistant to vibration and shock than a crystal but has a stability of only 5 ppm versus the 2 ppm of the non-M variant. Other than the choice of time base and stability, there are two major differences between the chips, which I’ll discuss below.

That I received a board with a wrong chip turned out to be fortuitous, since I was able to desolder the incorrect chip (at the cost of lifting one pad, hence the bodge wire in the photo above) and replace it with one of the “free samples” of genuine DS3231Ms I ordered from Maxim (many thanks to Maxim for offering such samples with minimal hassle) without having to waste a working chip. Note that although I placed the order for the free sample in August of 2017, the date code on the genuine chip pictured above is from December of 2011. Obviously Maxim keeps an inventory of chips in storage, presumably to have a buffer for spikes in demand.

The genuine Maxim chips (both the one I soldered to the board and the second one I received but haven’t yet put on a board) meet the datasheet specs for 1 Hz stability, as do the eBay-sourced ones. The laser markings on the packages are nearly indistinguishable, and all the eBay-sourced chips appear to be authentic Maxim parts.

After doing some tests with the M variants, two major differences between the two variants became clear. These are detailed in the respective datasheets (available here for the DS3231, and here for the DS3231M), but are a bit subtle and may be, as Dave Jones says, a “trap for young players”. Thus, I felt I should explicitly mention them here.

Here we go:

1. The DS3231 can be used as both an RTC and a TCXO but the DS3231M is only an RTC.

This is actually mentioned in the title of each chip in their datasheets, with the DS3231 described as an “Extremely Accurate I2C-Integrated RTC/TCXO/Crystal”, while the DS3231M is described as a “+/- 5 ppm, I2C Real-Time Clock”.

Why is this so? Let’s take a look at the block diagrams from the datasheet to find out. First, here’s the crystal-based DS3231:

An excerpt from the DS3231 datasheet showing the crystal and temperature compensation blocks, as well as the 32 kHz and INT#/SQW outputs.

The crystal, which is integrated into the package itself, is connected to the oscillator circuit and an array of capacitors that can be switched in and out to steer the frequency.

The output of the oscillator is sent to the control system which, after reading the temperature from the temperature sensor, determines how many capacitors to switch into the circuit to compensate for any changes in temperature. A divider in the control system outputs both a 1 Hz signal, used internally for timekeeping, and other signals used in the “square-wave buffer; INT#/SQW control” block. Critically, both of these outputs are temperature compensated.

The “square-wave buffer; INT#/SQW control” block has two external outputs, both open drain (and thus requiring external pullup resistors): the 32.768 kHz output and a user-programmable interrupt (e.g. for an alarm) or square wave (which is programmable to produce either 1 Hz, 1.024 kHz, 4.096 kHz, or 8.192 kHz signals) output.

In short, one can use the “32 kHz” pin to measure the actual, temperature compensated frequency of the crystal oscillator, as well as use the “INT#/SQW” pin to output a temperature compensated signal at either 1 Hz or one of several fixed frequencies.

Keep in mind the following caveat from Maxim (personal email from one of their application engineers):

The DS3231SN# is a crystal based RTC whose 32kHz output is temperature compensated. So depending on the temperature we will adjust the internal capacitive load to maintain a consistent 32kHz frequency across temperature. However, the DS3231 series is not intended to be used as a 32kHz reference as the design is strictly focused on creating the most accurate 1Hz signal to drive the RTC.

Seems reasonable to me: the DS3231 is primarily focused on timekeeping, and the 32 kHz output is a nice bonus. Although the 32 kHz output will be continuous (i.e., it won’t skip or add extra pulses), the frequency output is only corrected for temperature variations every 64 seconds and so may drift a bit during that time. When the temperature conversion and correction happens, there may be a sharp, distinct change in frequency. For systems that require smooth changes in frequency, using a DS3231 as a clock source or frequency standard might not be the best option. For other purposes though, it may work reasonably well. I use mine for driving AVR microcontrollers at low speed, and this works fine.

Compare the above diagram with that for the DS3231M:

An excerpt of the DS3231M datasheet showing the block diagram. Note that the 32 kHz output is driven directly from the divider without temperature compensation.
An excerpt of the DS3231M datasheet showing the block diagram. Note that the 32 kHz output is driven directly from the divider without temperature compensation while the INT#/SQW pin can emit a temperature compensated 1 Hz output.

In lieu of a crystal, the DS3231M’s MEMS “time-base resonator” oscillates at a high-but-unspecified frequency. The resonator signal is sent to both a divider and to the digital adjustment block. Note that the temperature sensor is connected to the digital adjustment block (but, critically, not the divider that drives the 32 kHz output), which in turn outputs a temperature compensated 1 Hz signal to the timekeeping block (out of view above) and to the INT#/SQW output pin which can produce a 1 Hz signal.

The 32 kHz output of the DS3231M is not temperature compensated, while the 1 Hz INT#/SQW output is.

The 1 Hz signal has a specified stability of 5 ppm, but the 32 kHz signal can vary by up to +/- 2.5%. Yikes.

If you have a frequency counter or an oscilloscope with a stable reference, you can see the difference between the two chips by probing the 32 kHz output (with a proper pullup resistor) and then touching the chip with your finger to gently warm it. The DS3231’s output frequency will slowly change and, after up to 64 seconds, the chip will compensate and the frequency will stabilize again. The DS3231M’s output frequency will change dramatically since the temperature coefficient of a MEMS oscillator is significantly higher than that of a crystal, though the temperature coefficient of the MEMS oscillator is not specified in the datasheet.

I have no idea why they designed the chip this way, but it is what it is. It’d be nice if a later revision to the DS3231M offered a temperature compensated 32 kHz output.

This brings us to the second point.

2. The DS3231 can output one of several frequencies or an interrupt on the INT#/SQW pin, but the DS3231M can only output a 1 Hz signal (or an interrupt).

By writing to bits 3 (Rate Select 1, RS1) and 4 (RS2) and setting bit 2 (Interrupt Control, INTCN) to 0 in the control register (0x0E) of the DS3231, a square wave at either 1 Hz, 1.024 kHz, 4.096 kHz, or 8.192 kHz will be output on the INT#/SQW pin.

On the DS3231M, the RS1/RS2 bits are not used and have no effect. When INTCN is set to 0, the INT#/SQW pin outputs a 1 Hz signal. No other frequency options are available.
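For concreteness, here’s a minimal sketch of selecting one of the DS3231’s square-wave rates from a Raspberry Pi. I’m assuming the smbus2 Python library, I2C bus 1, and the chip at its standard 0x68 address; on a DS3231M the same write is harmless, but (as noted above) the pin will still only toggle at 1 Hz.

```python
# Minimal sketch: select the 1.024 kHz square-wave output on a DS3231's
# INT#/SQW pin. Assumes a Raspberry Pi (I2C bus 1), the smbus2 library,
# and the chip at its standard I2C address, 0x68.
from smbus2 import SMBus

DS3231_ADDR = 0x68
CONTROL_REG = 0x0E

# Control register bits described above:
#   RS2:RS1 = 00 -> 1 Hz, 01 -> 1.024 kHz, 10 -> 4.096 kHz, 11 -> 8.192 kHz
RS1   = 1 << 3
RS2   = 1 << 4
INTCN = 1 << 2

with SMBus(1) as bus:
    ctrl = bus.read_byte_data(DS3231_ADDR, CONTROL_REG)
    ctrl &= ~(RS1 | RS2 | INTCN)   # INTCN = 0 selects square-wave output
    ctrl |= RS1                    # RS2:RS1 = 01 -> 1.024 kHz (DS3231 only)
    bus.write_byte_data(DS3231_ADDR, CONTROL_REG, ctrl)
```

Remember that the pin is open drain, so you’ll still need an external pullup to see anything on a scope.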

The INT#/SQW outputs of both chips are temperature compensated.

So far, these are the only two major differences I’ve found between the M and non-M variants. Are you aware of any more? Do you have any idea why Maxim would choose not to have the 32 kHz output of the DS3231M be temperature compensated? Why would they not allow for several user-selectable output frequencies and only allow the user to select the 1 Hz output? If you know, please comment!

A look inside the DS3231 real-time clock

Dallas Semiconductor, now owned by Maxim Integrated, is well known for making some excellent real-time clocks (RTCs). Take, for example, the DS1307: it’s simple, works with essentially any cheap 32,768 Hz watch crystal, is easily accessible over I2C, and is extremely power efficient (500nA current when running the oscillator on battery power).

As great as it is, the DS1307 has a major drawback: it relies on an external crystal and lacks any sort of temperature compensation. Thus, any change in temperature will cause the clock to drift. A 20ppm error in the frequency of the crystal adds up to about a minute of error per month. Not so great.

Fortunately, Maxim also offers the DS3231, which is advertised as an “Extremely Accurate I2C-Integrated RTC/TCXO/Crystal”. This chip has the 32kHz crystal integrated into the package itself and uses a built-in temperature sensor to periodically measure the temperature of the crystal and, by switching different internal capacitors in and out of the crystal circuit, can precisely adjust its frequency so it remains constant. It’s specified to keep time within 2ppm from 0°C to +40°C, and 3.5ppm from -40°C to +85°C, which means the clock would only drift 63 and 110 seconds per year, respectively. Very cool.

The one (very minor) downside is that it draws about twice the current of the DS1307: a bit less than 1 μA. Still, a common 220 mAh CR2032 battery could power the chip for at least a decade with no problem. Such a circuit would mostly be limited by the CR2032’s self-discharge rate anyway.
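If you want to check my arithmetic on the drift and battery-life figures above, it’s straightforward (nothing below comes from the datasheet beyond the ppm and current numbers already quoted):

```python
# Back-of-the-envelope checks of the figures quoted above.
SECONDS_PER_YEAR = 365.25 * 24 * 3600

for ppm in (2.0, 3.5):
    print(f"{ppm} ppm -> {ppm * 1e-6 * SECONDS_PER_YEAR:.0f} s/year drift")
# 2 ppm -> ~63 s/year, 3.5 ppm -> ~110 s/year

# For comparison, a 20 ppm uncompensated crystal:
print(f"20 ppm -> {20e-6 * 30 * 86400:.0f} s/month drift")   # ~52 s, about a minute

# Battery life from a 220 mAh CR2032 at ~1 uA average draw:
print(f"{220e-3 / 1e-6 / 8766:.0f} years")   # ~25 years, ignoring self-discharge
```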

In my case, I wanted to use such RTCs on several of my Raspberry Pis that are not regularly (read: almost never) connected to the internet, and so cannot always get their time from NTP servers.

Some clever person designed a very simple board that fits on the Raspberry Pi’s pin headers for power, ground, and I2C and has the DS3231 chip, pull-up resistors for the I2C bus, and a decoupling capacitor. It even has pads for a backup battery (not included, but adding a battery holder and coin cell is straightforward). Chinese vendors on eBay sell the board for about $1.50, with free shipping. Perfect.
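Before (or instead of) handing the RTC over to the kernel driver, it’s easy to poke the chip directly from Python and confirm it’s alive. Here’s a rough sketch, again assuming the smbus2 library, I2C bus 1, the chip at 0x68, and that the hours register is in 24-hour mode (the usual configuration):

```python
# Read the date and time straight from the DS3231's BCD registers (0x00-0x06:
# seconds, minutes, hours, day-of-week, date, month/century, year).
from smbus2 import SMBus

DS3231_ADDR = 0x68

def bcd(b):
    return (b >> 4) * 10 + (b & 0x0F)

with SMBus(1) as bus:
    sec, minute, hour, _, date, month, year = bus.read_i2c_block_data(DS3231_ADDR, 0x00, 7)
    print("20{:02d}-{:02d}-{:02d} {:02d}:{:02d}:{:02d}".format(
        bcd(year), bcd(month & 0x1F), bcd(date),
        bcd(hour & 0x3F),           # assumes 24-hour mode
        bcd(minute), bcd(sec)))
```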

Here’s the board I’m using on one of my Pis, along with the backup battery and holder I added.

The RTC module installed on one of my Raspberry Pis.

Considering that the DS3231 is not a cheap chip, costing ~$3.80 USD per chip in minimum quantities of 1000 from Digi-Key, it’s a bit surprising that the complete board only costs $1.50. Like Edward Mallon, I wondered if these were counterfeit chips that were pin- and function-compatible, QC rejects, or somehow otherwise illegitimate.

For science, I ordered a few extra boards and tested them over the last year, where “tested” means “set the time on the chips with a Pi that was NTP synchronized to a GPS timing receiver, disconnected them from the Pi, and left them on the shelf running on battery power for a year”. The chips would be in direct sunlight in the mornings, and the temperature in the room would range between about 15°C and 30°C throughout the year. Not extreme, but not precisely regulated either. I did not adjust the “aging register” in the chip to trim the oscillator before this test, and the register was set to its default value of “0”. After a year, the chip with the largest drift was only 16 seconds off, which is about 0.5 ppm. That’s well within spec, so I’m happy. If these chips were counterfeit, they were at least good counterfeits that worked as advertised.
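For the curious, here’s a hedged sketch of how one might read or trim that aging register from a Pi. As before I’m assuming smbus2, I2C bus 1, and address 0x68; the register address (0x10), the sign convention, and the roughly 0.1 ppm-per-LSB figure at room temperature come from the DS3231 datasheet.

```python
# Read and (optionally) adjust the DS3231 aging-offset register. Positive
# values add capacitance and slow the oscillator; each LSB is roughly 0.1 ppm
# at +25 C, and a new value takes effect at the next temperature conversion.
from smbus2 import SMBus

DS3231_ADDR = 0x68
AGING_REG = 0x10

def read_aging(bus):
    raw = bus.read_byte_data(DS3231_ADDR, AGING_REG)
    return raw - 256 if raw > 127 else raw          # sign-extend two's complement

def write_aging(bus, offset):
    assert -128 <= offset <= 127
    bus.write_byte_data(DS3231_ADDR, AGING_REG, offset & 0xFF)

with SMBus(1) as bus:
    print("current aging offset:", read_aging(bus))
    # e.g. write_aging(bus, 5) to slow down a chip measured ~0.5 ppm fast
```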

However, I wanted to look closer so I sacrificed one of the chips for science. Thanks to my friend Jesse for reminding me that I can just snip off the legs of the chip rather than trying to de-solder it. That made things a lot easier.

Here’s the top of the package. It claims to be an SN model, which means it is specced for the full -40°C to +85°C temperature range. The date code says it was made in week 33 of 2011, as part of lot 917AC. The # mark means it’s RoHS compliant.

The laser markings seemed a bit dodgy and not like the normal high-quality laser markings I see on other Maxim chips. I contacted Maxim, explained the situation, and sent photos of the package and die (see below). After checking their records, they say the style of the markings, the date code, and lot number are all consistent with that particular lot made in 2011, which strongly suggests the chips are legitimate. They also reminded me that they do not warrant or guarantee any products purchased from unauthorized resellers. Good to know, and not unexpected.

The exterior of the chip's package. It has the markings "DS3231SN" on the first line, "1133A4" on the second, "917AC" on the third, and a "#" sign on the fourth.
The exterior of the chip’s package.

I zoomed in with my USB microscope to examine the markings in more detail. It’s a bit hard to see in this close-up, but you should be able to see the digits “31”. Compare these markings to those on the Maxim MAX3232 chip I investigated earlier and you can see why I was a bit skeptical as to their legitimacy at first. Obviously, Maxim must have different types of laser marking equipment on their different production lines.

A close-up of the chip's markings. The digits "31" are barely visible.
A close-up of the chip’s markings.

I would normally digest the chip’s epoxy packaging in acid at work, but I was at home that day and didn’t have access to the chemicals and safety equipment I have in the lab, plus I didn’t want to dissolve the integrated crystal and its metal can. Instead, I embrittled the packaging by heating it in the flame of a common Bic lighter for several seconds and then quenching it in a glass of cool water. I repeated this process several times.

Next, I sanded down the back of the chip (assuming that the interesting parts of the die would face upward, which they did; if they hadn’t been on top, I’d have sacrificed another chip and sanded the top down) with fine sandpaper until I hit metal.

It turns out I was a bit too vigorous in my sanding, and accidentally sanded through the crystal’s metal housing and broke one of the forks of the tuning fork oscillating element. Oops.

In the photos below, the notch on the chip is to the left, so pin 1 is at the top left. The main die is behind the large copper pad to the left. The fuzzy “hairs” at the bottom are strands of the epoxy package that I didn’t clean up.

The underside of the chip's interior. Several large copper pads are visible, as is the internal 32,768 Hz crystal which has been cross-sectioned due to excessive sanding.
The underside of the chip’s interior.
A close-up of the base of the crystal, showing its connections and the broken fork.
The interior of the hermetically sealed metal can containing the crystal. It has been cross-sectioned by excessive sanding and one of the tines of the crystal tuning fork has been broken. The solder joints connecting the crystal to the exterior of the can are visible, as are some metal shavings from the sanding process.
A detailed look at the tuning fork crystal.
A close-up of the crystal tuning fork. One of the tines has been broken off.
A close-up of the crystal tuning fork.
A close-up of the base of the tuning fork, showing the electrical connections and solder joints.
The base of the tuning fork.

This was interesting, but even after Maxim said the packaging and exterior markings looked legitimate, I was curious whether the die itself was an actual Dallas/Maxim die or a fake. Using tweezers and a fine, sharp knife I was able to crumble away more of the epoxy package and remove the die. Unfortunately, the bond wires were still embedded in the package and so broke off when I removed the die. I also slightly scratched part of the die and cracked off part of the top-right corner. Clearly, acid digestion is the way to go.

Here’s the first look at the die itself. I had washed it with isopropanol and both the chip and the microscope slide are a bit wet. The die measures ~3.6 x 2.3 mm, and the images below were taken with my USB microscope.

The whole DS3231 die. The die is still wet with isopropanol and bits of the epoxy package are still stuck to the outer edges of the die. Scratches appear on several parts of the complex circuit laid out on the die.
The DS3231 die.

First, I wanted to check to see if the die was actually made by Maxim or if it was a fake. The die clearly says “DALLAS SEMICONDUCTOR”, as well as “©2004 (M) MAXIM”. Looks legit. That’s refreshing.

The manufacturer's markings on the die. The words "DALLAS SEMICONDUCTOR" are written vertically in all-caps, while the copyright sign, the year 2004, an "M" in a circle (for Maxim?), and the word "MAXIM" are written horizontally off to the side. There are additional markings that appear to say "14A3" and "16A3", but they are slightly blurry due to the very small size of the markings and the limitations of the microscope.
The manufacturer’s markings on the die.

Here’s some more photos of the die.

Another photo of the die showing the complex circuitry it contains.
Another photo of the die.
A different region of the chip showing different types of circuitry. The marking "DS3231" is clearly visible in the bottom-center, though there is an unknown marking between the "32" and "31".
The marking “DS32B31” is clearly visible in the bottom-center, though the “B” in the center of the text is hard to make out in this image. It’s unclear what the “B” means.
A different region of the chip, to the left of the "DS3231" marking mentioned earlier. Two bonding pads are visible, one with a bit of the gold bond wire remaining attached, while the other has had the wire removed.
Another region of the chip. Two bonding pads are clearly visible.
A photo showing the whole die. Ten bonding pads are visible, several still with the remains of gold wires still attached. A small area in the upper-right of the die has been snapped off while removing the chip from the packaging, and small pieces of packaging are still stuck to the outer edges of the die. Several distinctly different regions of the chip are visible, clearly with different functions.
An overview of the whole die. Note a small bit broken off in the top-right and some of the packaging still stuck to it.

In addition to my cheap USB microscope at home, I was later able to take the die into the lab at work and use the (very expensive) Zeiss microscope to take more pictures. I was also able to clean it more thoroughly using the ultrasonic cleaner so the images came out considerably better.

Alas, compatibility issues between the camera mounted on the microscope and my computer prevented me from using the camera to get high-quality photos at this time. I’ve ordered an adapter so I can get better photos, but it will be several weeks. At that time I will either update this post or link to a new one. I plan on creating large composite images of the die at various levels of zoom, and with different optical filters. In the interim, here are a few photos I took using my smartphone aimed through the eyepiece of the lab microscope. They are nowhere near as clear or stunning in appearance as they are when viewed directly through the eyepiece or via the on-scope camera.

A close-up of the die's manufacturer markings prior to ultrasonic cleaning. The text "(C) 2004 (M) MAXIM" is visible, as are several large pieces of dust.
A close-up of the die’s manufacturer markings prior to ultrasonic cleaning. Large specks of dust are visible.
A close-up of the DS32B31 marking in the bottom-center of the die. It's unclear what the "B" stands for. Several large pieces of dust are visible.
A close-up of the DS32B31 marking in the bottom-center of the die prior to ultrasonic cleaning. It’s unclear what the “B” stands for.
A picture of the die using reflected differential interference contrast (DIC). The colors appear different (mostly blue and gold) but the contrast between the elements of the chip is greatly enhanced.
A picture of the die using reflected differential interference contrast (DIC) after ultrasonic cleaning. The colors appear different, but the contrast between the elements of the chip is greatly enhanced.

Addendum 2017-07-29: I’ve been able to get the camera on the microscope to cooperate and have gotten several high-quality photos. As the microscope has an extremely short depth of focus, particularly at high magnification, some images have been “focus stacked” by combining several images at different focus depths. Similarly, the large composite images are made from several individual images that may be focused slightly differently from each other. These processes may cause visual artifacts to be present.

In general, images with green and red colored layers use standard reflected microscopy with no filters, while images with blue and gold layers use reflected differential interference contrast (DIC).

A high-quality image of the Maxim logo on the chip.
A DIC image showing the “Maxim” brand markings.
Several markings (A1, 02A3, 08A1, 14A3, 15A3, 16A3, and 17A3) are made of different material and differ in color.
These look like they’re identifying different layers of material laid on the silicon: each section of text requires a different depth of focus and appears to be made of different material.
A DIC image showing the markings (A1, 02A3, 08A1, 14A3, 15A3, 16A3, and 17A3), which are made of different materials and differ in color.
The same image as before, this time imaged using DIC.
The edge of the chip with many human-readable markings (e.g. Dallas Semiconductor, Maxim, etc.).
Most of the human-readable markings on the chip are found here, as are a bunch of scratches the die suffered when I removed it from the package.
A high-resolution composite image of the entire die.
A high-resolution composite DIC image showing the entire die.
A focus stacked image showing several circle-in-square circuit elements of two sizes.
A focus stacked image showing some interesting circuitry in the top-left corner of the die. Based on a discussion on the time-nuts mailing list, it appears likely that these are the capacitor arrays used for adjusting the frequency of the crystal.
A focus stacked DIC image showing several circle-in-square circuit elements of two sizes.
A focus stacked DIC image showing some interesting circuitry in the top-left corner of the die. Based on a discussion on the time-nuts mailing list, it appears likely that these are the capacitor arrays used for adjusting the frequency of the crystal.

That’s all the photos for now. I hope you found this as interesting as I did.

Well, that was an interesting failure mode…

I have a bunch of eBay-sourced DC-DC converters that I use for various purposes around the house. Most are ordinary “LM2596” buck converters (scare quotes because most seem to be clones: they’re marked as LM2596 and generally work well, but have different switching frequencies, which is supposedly common with such parts) configured as adjustable, constant-voltage power supplies whose output voltage is set by a multi-turn potentiometer. Very handy.

Others can be used in either constant voltage mode or constant current mode. For the latter, a serpentine strip of PCB trace acts as a low-value sense resistor. An LM358 dual op-amp integrates the difference between the voltage across the trace and a voltage set by a potentiometer, with the output connected to the regulator’s feedback pin via an LED so you can tell when the regulator is in constant current mode. Another potentiometer sets when the “charging” LED lights up; this is purely cosmetic, and the LED turns off when the current through the regulator drops below the setpoint set by the potentiometer.

Caleb Engineering has an excellent teardown of such a regulator here.

Here’s a few pictures of mine:

The overall regulator module. The input is on the left and the output on the right. The three potentiometers control, from left-to-right, the constant-voltage setpoint, the “charging” LED setpoint, and the constant current setpoint. Sorry for the bad lighting, but you can see the main regulator chip on the left, the op-amp on the right, and the linear regulator supplying the op-amp in the center.
The serpentine PCB trace used as a sense resistor. I typically measure around a 9 mV drop between the “OUT-” pin in the top-right (to which the load’s 0V/ground is connected) and a test point partially visible at the extreme left of the picture when a current of 500 mA is flowing.
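As a rough sanity check (these are my own numbers, not anything from the boards’ documentation), that drop implies a sense resistance somewhere around 18 mΩ, and the constant-current limit is just whatever current makes the trace’s drop match the setpoint voltage from the potentiometer:

```python
# Rough sanity check of the sense-trace figures quoted above (my own
# measurements, not official documentation for these boards).
v_drop = 9e-3    # measured drop across the serpentine trace, volts
i_load = 0.5     # load current during that measurement, amps

r_sense = v_drop / i_load
print(f"estimated sense resistance: {r_sense * 1e3:.0f} mOhm")          # ~18 mOhm

# In constant-current mode the op-amp holds the trace's drop at the setpoint,
# so for a hypothetical 9 mV setpoint the current limit would be:
v_setpoint = 9e-3
print(f"current limit at that setpoint: {v_setpoint / r_sense:.2f} A")  # ~0.5 A
```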

Today, I wanted to use one of these modules to charge some supercapacitors in a controlled way, so I grabbed one of the buck modules, set the voltage limit to 2.6V (to stay within the 2.7V maximum limit of the supercapacitor) and the current limit to 500 mA. For testing, I connected the input to a 12V supply and everything worked fine.

I then connected the input to a 5V supply, which is more convenient for most things I do, only to watch the regulator go into current-limiting mode and push out 3.5A (!!). The current limiting potentiometer did nothing, even when turned all the way down to zero. The capacitors and the LM2596 started getting toasty warm (uh-oh), so I unplugged things to investigate.

It turns out I forgot a crucial detail: the op-amp is powered by a 78L05 5V linear regulator connected to the input voltage. Although the LM2596 switching regulator that powers the load has a dropout voltage of less than a volt (so the 2.4V difference between the 5V input and 2.6V output is perfectly fine in any operating condition), the 78L05 requires at least 7V at its input to stay in regulation. With only 5V in, its output sagged below what the op-amp needed, the constant-current feedback loop was effectively broken, and the LM2596 tried its hardest to pull the output up to 2.6V, maxing out its current.

The culprit.

As soon as I connected the input of the module to a 9V or 12V supply, it worked great, since the 78L05 had a sufficient voltage difference to stay in regulation.

It’s worth being aware of this issue, particularly if your input power supply doesn’t have a lot of “oomph” behind it: if the input voltage ever drops below 7V (such as when supplying a heavy load) the 78L05 will drop out of regulation and the LM2596 will draw even more current, thus holding down the input voltage and preventing the system from recovering. Fuses are your friend in such conditions.
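A back-of-the-envelope way to capture that constraint (the dropout numbers here are my own assumptions, not vendor specs):

```python
# Minimum input voltage for these CC/CV buck modules to behave. The dropout
# figures are assumptions for illustration, not vendor specifications.
def min_input_voltage(v_out,
                      buck_dropout=1.0,   # assumed LM2596 headroom, volts
                      ldo_out=5.0,        # 78L05 output that powers the op-amp
                      ldo_dropout=2.0):   # assumed 78L05 dropout, volts
    """Both the buck and the 78L05 must stay in regulation."""
    return max(v_out + buck_dropout, ldo_out + ldo_dropout)

print(min_input_voltage(2.6))   # -> 7.0 V: a 5 V input isn't enough, even
                                #    though the buck itself would be happy
```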

To prevent such issues, you might consider using one of the buck-boost modules (also available in constant-voltage-only or CV/CC variants). These first use a boost converter to step the input up to a higher intermediate voltage (I have several different ones, some with LM2575 boost converters and others with XL6009 chips; both boost to around 28V), which the LM2596 then bucks down to the desired output voltage. The 78L05 can handle input voltages up to 30V and the op-amp currents are low, so it works fine. There’s some loss of efficiency from using two converters instead of one, and the maximum output voltage is slightly lower, but I haven’t found any edge conditions in the buck-boost configuration that cause bizarre failures like the buck-only converters do. One such buck-boost constant-current supply has been driving the IR LEDs in my DIY babycam for more than a year from a 5V input without any hitches.

Edit: Although the constant current buck-boost modules commonly found on eBay will work fine with lower input voltages because the linear regulator gets its input from the boosted voltage from the first stage, it seems they cannot start up properly if they’re connected to a dead short when they’re first connected to input power. The switching regulators go into current-limiting mode and the linear regulator doesn’t get enough voltage to properly start the op-amp for constant current mode. I blew a bunch of fuses testing this (better than blowing up components!). Once the switching regulators have started up and the linear regulator is in regulation, the constant current regulation works as expected.

This issue could have been avoided by adding two resistors and a small capacitor to the ON/OFF pin on the LM2596 buck regulator for a delayed startup. This would keep the buck regulator offline for enough time that the boost and linear regulator, as well as the op-amp, would start up and be ready. Alas, due to the layout of the boards from the eBay suppliers, modifying the existing board isn’t really feasible.

In short: the buck-boost regulators with constant-current regulation are better in general since they have fewer failure modes once they’re running, but their output current needs to be limited for a few moments when first connected, while the constant-current circuitry comes online; otherwise they just max out their current, which is not what you want. Adding some inrush-limiting circuitry (e.g. an NTC thermistor and, optionally, a bypass MOSFET for higher efficiency) would work great.

Here’s an example of the buck-boost boards that I like, even with the above-mentioned limitation.

My Daughter’s First Circuit

My daughter turns three in June. Yesterday, we were playing and an idea popped into my mind: she likes to help me build various electronic things at my desk, but she’s never really built anything of her own. I asked if she wanted to make something with me and she energetically agreed.

Here it is:

It’s a simple two-transistor astable multivibrator that alternates between the red and green LEDs at around 2Hz. Everything to the right of the red wire is pretty bog-standard: 5% tolerance 470 ohm current-limiting resistors for the LEDs and 100k ohm resistors for charging the 10uF capacitors. Two BC548 transistors do the switching. Some 24 AWG wire connects parts too far apart (or awkwardly placed) for component leads to reach.

In retrospect, I could have laid things out better, but she didn’t mind. The only major thing I’d change is using ceramic capacitors instead of electrolytic, as I’d like to keep this circuit around until she’s older and have it still work without the capacitors drying out, but I didn’t have any 10uF ceramics at hand. I’ll order some, have her pick them out, and swap them out.

On the left is a simple terminal block for connecting a power supply. I wanted the circuit to be robust against reversed polarity, so I used a bridge rectifier, letting it operate regardless of how the DC power supply is connected (I could have added a filter cap so AC could be used too, but I don’t have any wall warts with AC output, and she likes batteries, so this wasn’t a major design consideration). I could have used a single cheap diode, but the bridge rectifier uses Schottky diodes and so drops only about 0.6V compared to a 1N400x’s 0.7V, plus it means the circuit will actually work (rather than simply not be destroyed) regardless of how it’s connected, so that was an easy and robust choice.

A 50mA polyfuse provides protection from faults (important when using old cellphone Li-Ion batteries as a power source). All the exposed underside contacts of the unfused section (i.e. terminal blocks and rectifier) are liberally coated with hot glue for insulation, with the jumper wires on the top and bottom tacked down with hot glue as well. All solder and components are lead-free, with burrs and other sharp points on connections filed smooth for minimal danger.

My daughter loved picking the components out of the parts drawers, listened attentively while I explained what they did and how they work, and helped me put them in the correct places on the breadboard. After things worked and she (later) went to bed, I moved the same parts over to a protoboard for a bit more durability. Now she’s running around the house waving it (and the 1000mAh cellphone battery stuck to the bottom with double-sided tape) around, blinking it at her baby brother, and integrating it into play with her other toys.

This makes me happy.

Looking at a TP4056 Li-Ion charger with a FLIR ONE thermal camera

I recently acquired a FLIR ONE thermal camera, which deserves a separate post reviewing it, but for now let’s look at the TP4056 Li-Ion charger with integrated protection circuitry.

This is a pretty bog-standard, dirt-cheap Li-Ion charger that works really well. It does what it says on the tin: CC/CV charging, with charging current adjustable by replacing a specific resistor, 5V MicroUSB input, and pads/holes to accept connections to the cell, the load, and the charging power source (if one doesn’t want to use the USB port). No complaints at all, and no surprises.

I like that it has a battery protection circuit as well: the protection chip monitors the charging or discharging current and voltage, and protects the cell against overvoltage (e.g. from over-charging), undervoltage (e.g. from over-discharging), and over-current situations by switching off the MOSFET that connects the battery to the load and charging chip. The FET is arranged in a cool way such that, even if the over-discharge protection has tripped and the FET is open, you can trickle charge through the FET’s body diode at a very low rate in order to slowly charge the cell up without stressing it. Once it reaches the release voltage, the cell will charge at the normal speed.

One of the main reasons I bought the FLIR ONE thermal camera is to observe various electronic devices I have and see how hot they get, where the heat is dissipated, etc. Since the TP4056 is a linear charger and produces a modest amount of heat while charging, I figured this would make a great first test. Here’s one of the images I snapped:

As you can see, the chip gets moderately toasty when charging at 1A, and I can’t hold my finger on it for more than a second or two. This is a top view with the chip and other components visible to the camera. The TP4056 also has a thermal “radiator” (using the language in the datasheet) pad on the bottom that should be connected to a copper plane on the PCB. The board has a bunch of thermal vias under the chip to conduct the heat away to the other side, and the backside of the board is about the same temperature as the front. Neat.
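Incidentally, regarding that current-setting resistor: the TP4056 datasheet sets the charge current via R_PROG, roughly I_BAT = (V_PROG / R_PROG) × 1200 with V_PROG = 1V. Treat the numbers below as nominal; real boards vary a little.

```python
# Approximate relationship between the TP4056's programming resistor and its
# charge current, per the datasheet formula I_BAT = (V_PROG / R_PROG) * 1200
# with V_PROG = 1 V. Nominal values only; real modules vary a little.
def charge_current(r_prog_ohms: float) -> float:
    """Approximate charge current in amps for a given R_PROG."""
    return 1200.0 / r_prog_ohms

def r_prog_for(current_amps: float) -> float:
    """Approximate R_PROG in ohms for a desired charge current."""
    return 1200.0 / current_amps

print(charge_current(1200))   # ~1.0 A, the value these modules usually ship with
print(r_prog_for(0.5))        # ~2.4 kOhm for a gentler 500 mA charge
```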

I foresee a lot of fun (and useful projects) with both the camera and the battery charger.

Note to self: HC-05 bluetooth-to-serial modules need a pull-up resistor on the TX pin

I ran into some trouble today getting an HC-05 bluetooth-to-serial module to communicate with my Trimble Resolution T GPS receiver.

The ResT will send some data automatically once per second, but needs to be polled to send other data. Lacking the polling packet, weird things happen.

Some devices have built-in pull-up resistors so the module works fine, but the ResT doesn’t. The HC-05’s TX pin is open-drain, so without a pull-up it does nothing, causing confusion. Putting a >1k pull-up to 3.3V on that pin works wonders.

See https://mcuoneclipse.com/2014/03/30/getting-bluetooth-working-with-jy-mcu-bt_board-v1-06/ for more details.

Note to Self: Raspberry Pi & Motorola Oncore UT+ setup

This is the first of (hopefully) several “notes to self”. They are intended as a record of my various tinkerings and processes that I’ve learned. Although publicly readable, they’re meant as notes to myself in the context of my personal setup and are not really intended as complete “how-to” guides. If you find it useful, awesome! If not, sorry.

The version of NTPd packaged in Raspbian Jessie doesn’t have support for PPS (why?!) or the Motorola Oncore driver enabled. It needs to be recompiled to support those options. The Oncore hardware is quite old, so I understand them not wasting a bit of space by enabling the Oncore driver at compile-time (though really, disk space is cheap and abundant), but no PPS? C’mon.

It’s the law: Certificate Authority websites must suck.

I’m pretty sure that it’s some sort of universal law that all Certificate Authority websites must be filled with obfuscating marketing-ese wording, links to “white papers”, contradictory and uninformative text, and content generally tailored for manager-types.

Honestly, I don’t know why they do this: TLS certificates are essentially always handled by technical staff — not management — at companies. Smaller organizations typically leave the administration of TLS certs to their commercial web hosts (again, technical staff). Individual site operators either know how to handle certs or don’t, but for those who don’t the marketing fluff on a CA website isn’t likely to help at all.

There may be some very specific reason why a particular CA is required, such as needing to support particular software or devices that only include a limited selection of roots, and while these reasons may be decided by managers and executives, the actual deployment is done by technical staff. The CA websites should really be tailored for technical people, not managers.

In addition to the typical manager-speak found on CA websites, the amount of confusing information is shocking. Some of it is merely misleading (e.g. implying that a particular certificate enables 128/256-bit symmetric ciphers rather than merely vouching for the identity of the server; the supported symmetric ciphers are set in the server configuration independently of the certificate and are negotiated with the client), while others are outright deceptive: Symantec/Thawte go so far as to claim that Server-Gated Cryptography is still relevant in this day and age (hint: it isn’t). In addition to being absurdly insecure and out of date, 16+ year old “export-grade” browsers that require SGC for strong cryptography are likely completely incapable of rendering modern websites in a comprehensible manner. Supporting such ancient browsers is a Bad Thing.

I’m also surprised at how hideous some of the CA websites appear: quite a few look like they haven’t been updated in at least a decade.

Lastly, there are just way too many options presented by CAs. Domain-validated certificates are cheap and easy, though there’s no reason why phishing websites and the like can’t get perfectly-valid DV certs for their misleading or fraudulent sites: they do, after all, legitimately control their domain.

Still, DV certs provide reasonable protection from man-in-the-middle attacks, and CAs like Let’s Encrypt make DV certs available for free in an easily automated and installed way. If Let’s Encrypt’s ACME validation system won’t work for certain purposes, commercial CAs like Comodo and GeoTrust offer incredibly cheap DV certs in the form of PositiveSSL ($5/year) and RapidSSL ($9/year), respectively. Even Thawte offers relatively cheap “SSL123” DV certs for $31/year. There’s really no excuse for not using HTTPS.

Extended validation certs are useful for major companies, banks, etc. as the CA actually verifies the legitimacy of the entity behind the domain name. It should be extremely unlikely for any EV certificate to be issued illegitimately, though users might not actually check for anything more than the “green bar” (if they do that at all), so I generally think EV certs are a good idea.

That said, I’m not sure why there’s such an extreme price difference for EV certs. For example, compare Comodo ($101/year) and GeoTrust ($125/year) with Symantec ($600/year to $900/year) — the roots are equally ubiquitous and trusted, perform the same validation, and users never bother to check which CA actually issued a cert. So long as the green bar appears and the browser doesn’t yell at them, they don’t care.

Organizational and individually-validated certs are essentially worthless. They appear the same as DV certs in browser interfaces (no green bar), and essentially nobody bothers to check the O and OU fields in a certificate.

Charging more for wildcards is annoying, as it doesn’t cost the CA anything extra to issue them; one of the reasons I liked StartSSL (before their WoSign-related drama) was that they only charged for things that required human action. Domain-validated certificates for non-commercial purposes were completely free of charge. OV and IV certs required a human to perform the validation, and customers paid an annual fee to be validated. Once validated, customers could issue an unlimited number of certificates — including wildcards — for any domains they controlled. EV certs were a bit different, but still quite cheap. That was a refreshing change from the business-as-usual of the CA industry, though StartSSL seem to have screwed themselves over with shady behavior after being acquired by WoSign.

Simply put, CA websites and their offerings suck. They’ve always sucked, currently suck, and likely will always suck in the future. I have no idea why such wildly-profitable organizations can’t design a website that doesn’t suck and is targeted to the relevant technical people.

Edit: It’s been brought to my attention that SSLs.com no longer offers GeoTrust, Thawte, Symantec certificates, and instead only offer Comodo certificates. I’ll keep the links here for historical purposes, but if you want to get such certificates you’ll need to find another vendor.

Mozilla preparing to remove WoSign/StartSSL from trusted root store

Last year, StartSSL, a popular Israeli certificate authority of which I myself have been a customer, was quietly purchased by WoSign, a CA in China. All well and good, such things happen fairly often in the industry.

However, they cut some corners: WoSign didn’t disclose the purchase to Mozilla, in violation of Mozilla’s policy. On its own, that’s not a super-critical issue, but that’s not all they did: based on information provided in a Mozilla report, WoSign has been caught backdating SHA1-signed certificates to avoid an industry-wide ban on that hash algorithm due to its cryptographic weakness, going so far as to provide a standardized internal framework for issuing backdated certificates. Additionally, they used the newly-acquired StartSSL to issue at least one backdated certificate.

Evidently they did this because Windows XP SP2 (a long-outdated version of XP, which is itself in end-of-life status) is quite popular in China and does not support SHA256 signatures, so there is a demand for SHA1-signed certificates. In addition, some payment processors in the US didn’t plan ahead: they found that some of their old payment terminals only supported SHA1, and being unprepared for the deadline, they got WoSign to backdate some new certificates to avoid any issues.

In addition, WoSign’s back-end software used for validating domains, issuing certificates, etc. has evidently had a series of bugs that have resulted in them improperly issuing certificates for GitHub and the University of Central Florida without the approval of either organization. A bug also allowed an attacker to bypass domain validation entirely and have WoSign issue certificates for unvalidated domains. While bugs are an unavoidable part of software development, such critical bugs should have been found very early in testing and never made it to production.

Their internal policies seemed geared toward “issue first, validate later and revoke if necessary”, which is absolutely the wrong way to issue certificates and which is in violation of the CA/Browser Forum Baseline Requirements.

Shockingly, WoSign’s auditor, Ernst & Young (Hong Kong), didn’t catch any of these glaring issues.

Needless to say, Mozilla isn’t happy and is discussing what to do. Right now, the most likely response is to untrust new WoSign and StartSSL-issued certificates for a period of at least one year, after which time they could reapply for trusted status by undergoing both the standard audits as well as some extra, Mozilla-specified scrutiny. Existing certificates from the CAs would continue to be recognized, but no new trusted certificates could be issued by those companies.

I find the solution to be quite elegant: CAs have occasionally played a bit fast and loose, and have relied upon their “too big to fail” status. Revoking the trust bits outright for a major CA like Symantec/VeriSign, Comodo, or even relatively smaller ones like StartSSL and WoSign, would cause a massive disruption for innocent customers of that CA and was generally only considered for the most extreme cases (see DigiNotar).

Instead, the solution proposed by Mozilla allows innocent customers to continue to use their certificates without disruption until it comes time for renewal, at which point they’ll need to find some other option. The CA, however, is penalized by being unable to issue new certificates (if they do issue new certificates, those will be untrusted, and Mozilla has threatened to blacklist the entire CA immediately if they backdate certs to avoid the restriction) and thus loses both reputation and business.

I suspect that Google, Microsoft, and Apple will follow Mozilla’s lead, so the penalty will be essentially universal.

Very cool.

Ars Technica has more details on the situation.

Personally, I’m saddened by the whole situation: other than a somewhat-clunky web interface, StartSSL had been a solid CA for years prior to their acquisition. The one black mark was their response to Heartbleed (they were charging for revoking compromised certificates) which, although in accordance with their policies, was a bit of a dick move and bad PR. I used StartSSL certs on many of my sites and had recommended them to others.

After the acquisition by WoSign (which was not disclosed for nearly a year), StartSSL’s website switched to a poorly-translated version made in their China office (according to StartSSL). Although the speed of certificate issuance improved, the overall change was negative, with the web interface becoming laughably bad to use. The quality of customer service also decreased.

Still, StartSSL brought it on themselves. I no longer use StartSSL certs and don’t recommend anyone use them going forward. I may change my mind at some point in the future once they prove they’re trustworthy again, but not now.

Currently, I recommend using Let’s Encrypt, an open, automated, and free CA — this site uses LE-issued certs. Installation and server configuration are automatic and easy, and renewals are handled automatically by a cron job. It couldn’t be easier and I’m extremely happy.

For certain other, internal services I maintain that don’t play nice with Let’s Encrypt, I like Comodo PositiveSSL certificates sold by the reseller SSLs.com. Certs are cheap (around $5/year), issued in minutes, with a validity period up to 3 years. Unlimited reissues are included. Customer service is responsive and clueful. The one downside is their self-service interface only supports RSA certificates; if you want to use ECC certificates (Comodo PositiveSSL offers both all-RSA and all-ECC chains, which is nice) you’ll need to send the CSR to their customer service staff, who will manually submit it to the CA. They usually do this quite quickly.