
This is a great question, and in essence, what you have suggested is just what we have chosen to do from 20th May 2019. But why didn’t we do it in 1960? And why did we use electric current as a base unit rather than electric charge?

Thinking back to 1960

To understand this we need to travel back in time to 1960 and consider two linked aspects of the choices of base units that faced metrologists back then.

• Definition: The units needed to have a relatively succinct definition – one which said unambiguously what we mean by, say, one ampere or one coulomb.

• Realisation: There needed to be a way to realise that definition i.e. to create ‘one’ unit: one ampere or one coulomb.

The ampere definition goes like this:

The ampere is that constant current which, if maintained in two straight parallel conductors of infinite length, of negligible circular cross-section, and placed one metre apart in vacuum, would produce between these conductors a force equal to 2 × 10⁻⁷ newtons per metre of length.

This is a pretty arcane definition and I think it was crafted this way in order to allow scientists to make coils of wire and calculate the force between coils which could then be measured. In short, from this definition one could create (realise) an apparatus where one would know from the definition that one ampere of current was flowing by measuring the force between the coils. Apparatus of this type generally looked like a weighing balance and the force was created by the application of weights traceable to the kilogram.
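As a rough numerical check, the realisation described above rests on the standard formula for the force per unit length between two long parallel wires, F/L = μ₀I₁I₂/(2πd). A minimal sketch in Python, using the pre-2019 exact value μ₀ = 4π × 10⁻⁷ N/A² (the function name is my own, for illustration):

```python
import math

MU0 = 4 * math.pi * 1e-7  # vacuum permeability, N/A^2 (exact in the pre-2019 SI)

def force_per_metre(i1_amps, i2_amps, separation_m):
    """Force per metre of length between two long, parallel, thin wires."""
    return MU0 * i1_amps * i2_amps / (2 * math.pi * separation_m)

# Two wires each carrying 1 A, placed 1 m apart, as in the definition:
f = force_per_metre(1.0, 1.0, 1.0)
print(f)  # ~2e-7 N per metre, just as the definition requires
```

The factors of π cancel, which is exactly why the definition's magic number comes out as the round value 2 × 10⁻⁷.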

How might the equivalent definition for the coulomb have been composed?

The coulomb is that amount of charge which….

It is not obvious what would come next. It might be something equivalent to the ampere definition along the lines of:

The coulomb is that quantity of electric charge such that, if placed on a capacitor of specified design would result in a force between the plates of the capacitor of ???? newtons.

There are several problems with this. The first is that one coulomb is a VERY large amount of electrical charge and experimentally it cannot be placed on anything. It is (almost) impossible to ever accumulate 1 coulomb of charge. Experiments which involve large accumulations of charge are generally not very precise and typically involve large lightning bolts!
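To get a feel for just how large one coulomb is, here is a quick back-of-envelope calculation using Coulomb's law (a sketch; the value of the Coulomb constant k ≈ 8.99 × 10⁹ N·m²/C² is the standard one):

```python
# Coulomb's law: F = k * q1 * q2 / r**2
K = 8.9875517923e9  # Coulomb constant, N m^2 / C^2

# Force between two point charges of 1 C each, held 1 m apart:
f = K * 1.0 * 1.0 / 1.0**2
print(f)  # ~9e9 N: roughly the weight of a million tonnes
```

No laboratory apparatus could confine charges exerting forces like that on each other, which is why nothing resembling the capacitor definition above could ever be realised.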

So the use of the ampere as a base unit, rather than the coulomb, led to better realisations of a unit related to electric charge.

So what’s changed now?

In the latest change, we specify the value of the charge on the electron e (in coulombs), and state that the current is just given by the amount of electrical charge passing a point on the wire per second. If we can do this now, why not then?
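Under the new definition this really is just arithmetic: with e fixed exactly, one ampere is one coulomb per second, i.e. 1/e elementary charges passing a point each second. A sketch:

```python
E_CHARGE = 1.602176634e-19  # elementary charge in coulombs, exact since 20 May 2019

current_amps = 1.0
electrons_per_second = current_amps / E_CHARGE
print(electrons_per_second)  # ~6.24e18 electrons per second for one ampere
```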

I think it is because we now have ways of realising currents (and voltages and resistances) in terms of fundamental constants, e and h (the Planck constant). Two effects, unimaginable in 1960, enabled this:

• Predicted in 1962, the Josephson Effect occurs at junctions between superconducting wires. If we shine microwaves of known frequency on the junction, then a DC voltage appears across the junction whose magnitude is exactly proportional to the frequency of the microwaves. The constant of proportionality is the inverse of the Josephson constant (KJ), which has the value:

KJ = 2e/h

i.e. it is specified as an exact ratio of two natural constants. So this means we can generate arbitrary voltages if we know the frequency of the microwaves. This is now the basis of all high precision voltage measurements.

• Then in 1980 a second effect was discovered, the Quantum Hall Effect. In this effect the electrical resistance of a thin piece of semiconductor becomes quantised, taking values RK/i where i is an integer and RK (the von Klitzing constant) has the exact value:

RK = h/e² (ohms)

This means we can generate standard resistances which will never age or drift in value. This effect is now the basis of all high precision resistance measurements.
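Both constants follow directly from the exact 2019 values of e and h, using the conventional definitions KJ = 2e/h and RK = h/e². A quick numerical check:

```python
E = 1.602176634e-19   # elementary charge, C (exact since 20 May 2019)
H = 6.62607015e-34    # Planck constant, J s (exact since 20 May 2019)

KJ = 2 * E / H        # Josephson constant, Hz per volt
RK = H / E**2         # von Klitzing constant, ohms

print(KJ)  # ~4.836e14 Hz/V, i.e. ~483.6 GHz of microwaves per millivolt
print(RK)  # ~25812.807 ohms
```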

Since the Josephson Effect allows us to create known voltages and the Quantum Hall Effect allows us to create known resistances, we can use both effects to create known currents.
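Putting the two effects together: a Josephson junction driven at a known microwave frequency supplies a known voltage, a quantum-Hall device supplies a known resistance, and Ohm's law then gives a known current. A hedged sketch (the 70 GHz drive frequency is just an illustrative choice, and I assume the first Josephson step and the i = 1 Hall plateau):

```python
E = 1.602176634e-19   # elementary charge, C (exact)
H = 6.62607015e-34    # Planck constant, J s (exact)

freq_hz = 70e9                   # illustrative microwave drive frequency, 70 GHz
voltage = freq_hz * H / (2 * E)  # Josephson voltage on the first step, ~145 microvolts
resistance = H / E**2            # quantum-Hall resistance (i = 1 plateau), ~25.8 kilohms
current = voltage / resistance   # Ohm's law gives a known current of a few nanoamperes
print(current)
```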

Back in 1960 neither the Josephson Effect nor the Quantum Hall Effect had been imagined. Instead electrical currents were defined in terms of the force between wires.

• Until 20th May 2019, the Planck constant h and the electronic charge e both had to be measured in terms of the existing definitions of the SI units, so both had an experimental uncertainty.

• After 20th May 2019, the Planck constant h and the electronic charge e will form the basis of our system of measurement and they will have no experimental uncertainty associated with them. Instead there will be an uncertainty related to how well we can realise standard volts, amperes and ohms in terms of h and e. However, we can expect experimental techniques to reduce this uncertainty of realisation over the decades and centuries to come.
