Saturday, 16 November 2013

Technology focus - ECU Development - Measurement and Calibration


The ECU (Electronic Control Unit) used in the development process is not the same as a production ECU. It is specially equipped with a calibration interface which you will never see on a production ECU!

There are a couple of technologies available to 'talk' to the ECU. One is a simple CAN-based interface with calibration protocol software installed in the ECU (known as CCP - CAN Calibration Protocol); in this case, the ECU needs extra memory (compared to standard) to handle the measurement labels online. Another option is to equip the ECU with an 'emulator'. This device is installed inside the ECU and has direct read/write access to the data bus of the microcontroller. It also has additional memory and processing capability so that it can handle communication with the PC running the calibration tool software directly. Generally speaking, the emulator has superior performance to the CAN solution, but is more complex and costly to implement.



Fig 1 – ECU emulators (also known as ETK) can be parallel (i.e. directly connected to the data bus) or serial (i.e. directly connected to the microcontroller) (source: ETAS)

Once you have the physical connection to the ECU, you need to understand what is going on inside and be able to make sense of it - you also need to be able to change parameters and then store or apply them. To do this, you need two pieces of information: the actual calibration data of the ECU, which is stored in a HEX file, and a description of how to interpret the HEX file, which is the A2L file.

The HEX file is a binary file, so you need additional information (from the A2L file) to know which part of the microcontroller's memory is used for which values, and how to convert the 1s and 0s into something meaningful – a physical value, for example engine speed. The A2L and HEX file formats are standardised, and both files are delivered with every development ECU to allow the Calibration Engineer to access and calibrate it.

Fig 2 – Once you have access to the ECU, you need to understand what's going on inside – the A2L and HEX files provide this! (source: ETAS)
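To make the raw-to-physical conversion idea concrete, here is a minimal Python sketch of the kind of rule an A2L file describes: a label name, a memory address, a data type and a linear conversion (physical = raw x factor + offset). The label name, address and factors below are invented purely for illustration - in practice the calibration tool parses the A2L and does all of this for you.

```python
import struct

# Illustrative only: the label name, address, data type and conversion
# factors are invented - a real calibration tool reads them from the A2L file.
A2L_LIKE_ENTRY = {
    "name": "EngineSpeed",   # measurement label
    "address": 0x0010,       # offset into the (dumped) ECU memory image
    "data_type": "<H",       # unsigned 16-bit, little-endian (struct format)
    "factor": 0.25,          # physical = raw * factor + offset
    "offset": 0.0,
    "unit": "rpm",
}

def raw_to_physical(memory_image: bytes, entry: dict) -> float:
    """Pull the raw integer out of the memory image and apply the linear
    conversion rule (physical = raw * factor + offset)."""
    size = struct.calcsize(entry["data_type"])
    raw_bytes = memory_image[entry["address"]: entry["address"] + size]
    (raw,) = struct.unpack(entry["data_type"], raw_bytes)
    return raw * entry["factor"] + entry["offset"]

# A fake 32-byte memory dump with the raw value 8000 stored at offset 0x10.
image = bytearray(32)
image[0x10:0x12] = struct.pack("<H", 8000)

speed = raw_to_physical(bytes(image), A2L_LIKE_ENTRY)
print(f"{A2L_LIKE_ENTRY['name']} = {speed} {A2L_LIKE_ENTRY['unit']}")  # 2000.0 rpm
```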

The task, measuring and calibrating
So now we know the hardware and software involved, but what does the calibration task actually mean, and how is it done? With the above-mentioned set-up, we can access the ECU during run-time and make changes - for example, changing ignition timing to give the best performance at any given engine operating condition (speed, load). The ignition timing is held in a 'map' – the map covers all engine operating conditions and provides an ignition timing demand value as a set point. Using the calibration tool, the map can be adjusted during testing, optimising it to give the best performance. Note that a map is treated as a single calibration label, even though it contains many values.
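As a rough illustration of what 'holding the ignition timing in a map' means at run-time, the sketch below (not actual ECU code - the breakpoints and values are invented) looks up a timing demand by bilinear interpolation between the four map cells surrounding the current speed/load point, which is essentially what the ECU does continuously:

```python
from bisect import bisect_right

# Hypothetical 4 x 4 ignition map: timing demand (deg BTDC) over speed/load
# breakpoints. Real maps are larger and live inside the ECU.
SPEED_BP = [1000, 2000, 4000, 6000]          # rpm
LOAD_BP  = [20, 40, 60, 100]                 # % load
IGN_MAP = [                                  # rows = speed, cols = load
    [10, 12, 14, 15],
    [18, 20, 22, 20],
    [26, 28, 27, 24],
    [30, 30, 28, 25],
]

def _bracket(bp, x):
    """Return the indices of the two breakpoints surrounding x (clamped)."""
    x = min(max(x, bp[0]), bp[-1])
    i = min(bisect_right(bp, x), len(bp) - 1)
    return i - 1, i, x

def ignition_demand(speed, load):
    """Bilinear interpolation between the four surrounding map cells."""
    i0, i1, s = _bracket(SPEED_BP, speed)
    j0, j1, l = _bracket(LOAD_BP, load)
    fs = 0 if i0 == i1 else (s - SPEED_BP[i0]) / (SPEED_BP[i1] - SPEED_BP[i0])
    fl = 0 if j0 == j1 else (l - LOAD_BP[j0]) / (LOAD_BP[j1] - LOAD_BP[j0])
    top = IGN_MAP[i0][j0] * (1 - fl) + IGN_MAP[i0][j1] * fl
    bot = IGN_MAP[i1][j0] * (1 - fl) + IGN_MAP[i1][j1] * fl
    return top * (1 - fs) + bot * fs

print(ignition_demand(3000, 50))   # blends the 2000/4000 rpm and 40/60 % cells
```

The calibration task is then about filling and adjusting the numbers in the map, not changing the look-up code.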

Fig 3 – Calibration is round trip Engineering, making adjustments, measuring, analysing and then adjusting again (source: ETAS)

The Calibration Engineer, using the calibration tool, has access to all the labels during testing, so he can adjust calibration labels and see the responses by monitoring measurement labels. Often the engineer also wants to measure information that the ECU itself does not provide, so the calibration tool is frequently used in conjunction with measurement hardware, allowing physical values to be captured with additional sensors. For example, exhaust temperature may be measured with sensors installed on a development vehicle in order to calibrate the exhaust temperature model inside the ECU, which is used on the production ECU for component protection.

Fig 4 – Screenshot of the calibration tool, showing maps and curves (calibration labels) that need to be adjusted and optimised (source: ATI)


Fig 5 – Overview of a typical vehicle measurement set-up for calibrating the ECU (Source: ETAS)


During the development cycle, the Calibration Engineer will adjust and change many values inside the ECU in order to optimise and characterise the engine. In a modern powertrain, this can take teams of people months, or even years, to complete. Consider an ECU with 30,000 labels, which will be fitted to 10 variants of vehicles. Each vehicle has a different calibration in order to differentiate it in the market. Each vehicle has to be calibrated with respect to emissions, performance, fuel consumption, drivability and on-board diagnostics – each one of these tasks is considerable, and they all impact on each other. It is very typical that calibration of a single ECU variant is managed by large teams of Engineers, often with specialist knowledge of how a function works and how to calibrate it. For example, there may be a team of Engineers calibrating emissions, which will include a specialist person or team who can deal with the start/stop system, or the after-treatment system. This complex environment creates masses of data (calibration and measurement) that needs to be handled, analysed, controlled and merged in order to create the final ‘master’ calibration that will be signed off by the Chief Calibration Engineer. This is the final version that will then be deployed on the production vehicle. This final calibration is normally ‘flashed’ into the ECU during the vehicle production cycle, prior to the final vehicle test in the factory before the vehicle is shipped.

The future for ECU development
It is recognised in industry that the calibration task, and the associated software development for controllers, is becoming the majority task in the development of a modern vehicle or powertrain. This trend is unlikely to reverse, and the task is becoming impossible to manage efficiently with traditional technical approaches. To deal with this, new methods are being developed, optimised and deployed. A popular approach is model-based engineering. This means reducing the amount of testing by making some strategic measurements, then fitting a mathematical model to the measurements to provide accurate prediction in the areas where no measurement was made. For example, take a simple map which is 8 by 8 in size - that is 8 x 8 = 64 data points. So, in order to populate this map we would need 64 measurements! However, it may be possible to make 20 strategic measurements, fit a model to this data, then make 12 further measurements to validate the model (32 in total), which would reduce the number of measurements by half. The key here is to define the measurement strategy effectively so that a model can be fitted accurately. This needs an approach called design-of-experiments (DOE).

Fig 6 – The more parameters to be adjusted, the more work to be done – with current systems, the complexity is such that a traditional approach would take years to calibrate an ECU!
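Here is a minimal sketch of the model-based idea, using synthetic data in place of real test-bed measurements: fit a simple second-order model to 20 'strategic' points, then check its accuracy against 12 validation points. Real DOE tools choose the measurement points far more carefully (space-filling or optimal designs) and use more capable model types, but the workflow is the same.

```python
import numpy as np

rng = np.random.default_rng(0)

def engine_response(speed, load):
    """Stand-in for a real measured quantity (e.g. torque); in reality this
    comes from test-bed measurements, not a formula."""
    return 50 + 0.02 * speed + 1.5 * load - 0.01 * load**2 + rng.normal(0, 2)

# 20 'strategic' training points and 12 validation points over the operating range
train = rng.uniform([1000, 10], [6000, 100], size=(20, 2))
valid = rng.uniform([1000, 10], [6000, 100], size=(12, 2))
y_train = np.array([engine_response(s, l) for s, l in train])
y_valid = np.array([engine_response(s, l) for s, l in valid])

def features(points):
    """Second-order polynomial regressors in speed and load."""
    s, l = points[:, 0], points[:, 1]
    return np.column_stack([np.ones_like(s), s, l, s * l, s**2, l**2])

coeffs, *_ = np.linalg.lstsq(features(train), y_train, rcond=None)
pred = features(valid) @ coeffs
rmse = np.sqrt(np.mean((pred - y_valid) ** 2))
print(f"validation RMSE: {rmse:.2f}")  # if acceptable, the model fills in the rest of the map
```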

Another fast-moving trend to accelerate the development of ECUs is the concept of 'frontloading' – this means moving specific aspects or tasks earlier in the development cycle, where they can be performed in a lower-cost environment. For example, if a vehicle manufacturer did all their development with prototype vehicles, the cost would be massive, as many, many vehicles would be needed. So, to save time and money, if some of the tasks can be done in a test facility, this is generally cheaper because the facility can be adapted and re-used again and again. A good example here is the engine or transmission: a large amount of development can be done on a test bed, with just the final adjustments and refinements made in a vehicle test.

With current technology developments, this has moved forward a step - much development work can now be done on a PC, with a simulation environment – and this is very applicable to ECU development work. ECU software and functions can be developed and tested easily in virtual environments. A full ECU, with a vehicle, driver and environment, can be simulated on a PC, and the simulation can be run faster than real time. This means a 20-minute test run can be reduced to a few seconds (depending on the complexity and PC processing power) – providing simulated results for analysis and development. A typical next step would be to have the ECU itself in a test environment – thus being able to test the actual ECU code, running on the ECU hardware, with physical connections to electrical stimulation, but a completely virtual test environment (driver, vehicle, environment). This approach is known as HiL testing – Hardware-in-the-Loop!
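The 'faster than real time' point is easy to demonstrate. The sketch below steps a deliberately trivial vehicle model through a 20-minute cycle in fixed time steps; because simulated time is just a variable, the whole cycle completes in well under a second of wall-clock time on an ordinary PC (the model and drive cycle here are invented placeholders for the full engine/driver/environment models used in practice):

```python
import time

def run_cycle(duration_s=1200.0, dt=0.01):
    """Step a first-order 'vehicle speed' model through a crude drive cycle.
    Simulated time advances by dt per step regardless of the wall clock."""
    sim_t, v = 0.0, 0.0
    while sim_t < duration_s:
        demand = 30.0 if (sim_t // 60) % 2 == 0 else 10.0   # alternate speed demand each minute
        v += (demand - v) * dt / 5.0                         # 5 s time constant
        sim_t += dt
    return sim_t

start = time.perf_counter()
simulated = run_cycle()          # a '20 minute' cycle
wall = time.perf_counter() - start
print(f"simulated {simulated:.0f} s of driving in {wall:.2f} s of wall-clock time")
```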


Fig 7 – Signal flows in a real system, compared to HiL simulation (source: dSPACE)

Fig 8 - Typical development paths and tasks for ECU development (Source: ETAS)

There is no doubt that developing and calibrating an ECU is a complex task! Many tools and technologies are available to help, and many more will need to be developed to keep up with the demand for more sophistication!

Tuesday, 29 October 2013

Technology focus - Engine Downsizing and Downspeeding


You may have heard the term – engine downsizing. It’s a hot topic in the automotive world and many car manufacturers are hurriedly developing ‘downsized’ engines to meet current and future emission regulations. Some, like VW, are already ahead of the game and have these engines in production to buy today. But what does this concept actually mean? What are the benefits of this approach? And what are the technical challenges?


Improved fuel economy and reduced CO2 emissions are the major challenges faced by vehicle manufacturers developing future passenger car powertrains. Gasoline engine downsizing is the process whereby the speed/load operating point is moved to a more efficient operating region (at higher load) through the reduction of engine capacity, whilst maintaining the full-load performance via pressure charging. Downsizing concepts based on turbocharged, direct injection engines are a very cost-effective solution. The most significant technical challenges for such fuel-efficient turbocharged GDI (Gasoline Direct Injection) engines are providing the required low-end torque and, in addition, a suitable transient response to give the required levels of engine flexibility and drivability.




Fig 1 - What is downsizing?

In downsized engines, by applying a refined single-stage charging concept, the full engine torque can be available as low as 1250 rpm. This can be combined with a specific power of 80 kW/l. With the use of dual-stage boosting, high torque with a specific power of more than 140 kW/l is achievable. In conjunction with some other new technologies - exhaust cooling, cooled external EGR at high load - this results in a significant improvement in real-world fuel economy. In addition, efficient spray-guided, stratified charge systems are utilised to gain further improvements. The overall goal is to create an engine with excellent high-load performance and durability, and to operate the engine in this high-load region as much of the time as possible. The combination of GDI and turbocharging, implemented on a small displacement engine, is a good basis for combining high real-world fuel economy with acceptable performance - even under a stringent CO2 scenario.



Fig 2 - Downsized engine - torque/speed/fuel consumption - development over time

Why Downsize?
In the past, gasoline engines were perceived as a very cost-effective powertrain solution. Emissions were not that important as long as three-way catalyst technology was used to ‘mop up’ the exhaust. Fuel economy was not really the primary target - if you wanted good fuel economy, you’d buy a diesel! Diesel engines, on the other hand, needed considerable technological effort in order to meet emission legislation, and diesel engine developers were allowed to introduce rather costly technologies in order to meet these emission targets. The key phrase was: “emissions are a must; low fuel consumption is nice to have”. In current times, there’s a new direction, and that is CO2 reduction. This discussion is significantly intensified by a penalty tax for OEMs (Original Equipment Manufacturers) not meeting future CO2 limits. Now that CO2 is seen as a harmful emission, gasoline engine developers get a chance to invest in some fuel economy technology. For gasoline engines in particular, downsizing/downspeeding concepts based on turbocharged GDI seem to be in pole position in the race to become the most accepted technology for reducing fuel consumption - whilst keeping the additional benefit of performance and drivability (when compared to a traditional diesel).




Fig 3 - The technical challenges to achieving a downsized engine concept can be considerable

Technology and Challenges of Downsizing
So what are the technical aspects of a downsized engine?  It’s a small engine that produces high power. So, it’s operating much closer to the thermal and physical limits of the materials used in construction of the major engine components. It needs to have the following attributes:
  • A well designed combustion system that allows high compression ratios to promote efficiency
  • It needs to have excellent low speed torque, as most of the engine power is produced via the torque – like a diesel engine
  • It needs to have good, transient response to give an appealing performance to fulfil driver expectations
  • Good fuel economy – reduced requirements for full load enrichment
  • Very robust and durable base engine design
In order to achieve the above engine profile, there are a number of technologies in development and use. They can be used in combination with each other, or in conjunction with other technologies for CO2 reduction (like mild-hybridisation and start/stop) - some of the technologies specifically involved in a 'downsized' engine package are:

  • Direct Gasoline Injection
  • Turbo and super charging
  • Cooled EGR
  • Active Exhaust cooling
  • Variable valve timing


Fig 4 - Cooled turbocharger housing - no need to run rich to control high exhaust temperatures, thus saving fuel (Source: AVL)


Downspeeding
This is another similar approach, often mentioned in the same context as downsizing. It involves moving the most frequently used engine operating point to where it is more efficient (as downsizing does, but in this case by lowering speed instead of raising load). At lower engine speed, a higher torque is needed to maintain the required power. The advantage of low-speed operation is that friction losses are reduced (due to lower rubbing speeds between components). In addition, this concept provides real fuel savings, as fuel conversion efficiency often increases at lower engine speed. The technical challenge is that high torque means high loads on all engine components, which increases material costs. Also, downspeeded engines need a fast torque build-up in order to meet the requirements for transient response. This requires pressure charging as a minimum and, in addition, perhaps some other technical approach to produce the required torque – for example, an electrically assisted powertrain, or electrical assist for the turbo/supercharger.
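The torque/speed trade-off falls directly out of P = T x omega. A quick worked example (with an arbitrary 30 kW cruise power demand) shows why downspeeding pushes component loads up:

```python
import math

def torque_for_power(power_kw, speed_rpm):
    """Torque (Nm) needed to deliver a given power at a given engine speed:
    P = T * omega, with omega in rad/s."""
    omega = speed_rpm * 2 * math.pi / 60
    return power_kw * 1000 / omega

# The same 30 kW cruise demand, delivered at two different engine speeds:
for rpm in (3000, 1500):
    print(f"{rpm} rpm -> {torque_for_power(30, rpm):.0f} Nm")
# 3000 rpm -> ~95 Nm, 1500 rpm -> ~191 Nm: halving the speed doubles the torque required
```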

De-rating
Another less common but viable alternative to downsizing is de-rating, especially for diesel engines. De-rating means to limit the power output of a given engine design - that is, not going to the specific power extremes of the design (power density typically limited at ~ 45 kW/L). The advantage here is that this lessens the requirement for a sophisticated engine design with expensive high-end components, due to lower PFP (peak firing pressure). Of course, for such de-rating concepts, viability has to be investigated with respect to the expected production volume, costs, image, regional market aspects, etc. - these factors all have to be taken into consideration.



Fig 5 - Comparison of concepts downsizing vs. de-rating (Source: AVL)

De-rating also offers the potential for commonality between gasoline and diesel engine families in production. An increased number of parts shared with gasoline engines leads to higher production volumes and consequently lower cost.

Summary
It’s a fact that downsizing is the current way forward; you can see in the market that most manufacturers have, or are currently developing, engines with lower displacements and better CO2 figures - while maintaining the same power output. That’s all fine – if we can squeeze more out of an engine, increasing its efficiency whilst maintaining durability, then that’s a win-win all round.
Developments in material technology and engine design have facilitated this opportunity, but future powertrains will need more than just smaller displacements to achieve forthcoming emission regulations, without sacrificing the driving experience.

So, downsizing and downspeeding will be adopted in conjunction with other technologies. The reason is that smaller engines produce less torque, even highly boosted smaller engines, and the market will not accept ‘sluggish’ vehicles in today’s modern traffic. As we move towards micro and mild hybrids, and start/stop technology, the electric motor as a torque-supporting element becomes even more viable! An electric motor can produce full torque at low speed and, for short-duration torque boosting, is an ideal option to fill the gap in future downsized powertrains.


Tuesday, 22 October 2013

Engine Combustion - Spark Ignition (Gasoline)


Engine combustion is a fascinating topic to get to grips with, particularly when comparing compression and spark ignition, the fuels used and the properties needed for each respective type. Looking at the details, and the current trends in technology, it’s not hard to see the convergence and similarity between gasoline and diesel engine fuel systems and combustion. Let's look in detail first at spark-ignition combustion.

Spark ignition
For spark ignition combustion, the mixture is prepared completely prior to combustion (outside of the combustion chamber) - that is, the fuel is introduced to the air, fully atomised, and in theory this mixture is uniform in its distribution - so-called ‘homogeneous’. The amount of fuel in proportion to air should be chemically correct. What this means is that there is enough air, containing oxygen, to fully oxidise or ‘burn’ all of the fuel's volatile content. Sounds quite straightforward, but in practice it is not that easy, considering all the operating conditions that a vehicle engine has to encounter. This mixture is compressed in the cylinder as the cylinder volume decreases with the piston rising towards top dead centre (TDC). The pressure increases and, with reference to the simple gas laws, the temperature of the mixture also increases - but not sufficiently to reach the ignition point of the fuel/air mixture.

So far then, we have mixed fuel and air and compressed it. But there are many hurdles for the engine designer to overcome to be able to do this efficiently for a multi-cylinder engine. These days, fuel is injected into the air stream near the inlet port, but remember the carburettor (or even the single-point injector)? There, mixture preparation occurred away from the point of entry into the cylinder, so distributing the mixture evenly to each cylinder - with the same amount of fuel for each cylinder at a given operating condition - was a real headache for the engine designer.



Fig 1 - Single a) and Multi-point b) injection system layout 

Why? Well, in order to get the best possible performance and efficiency out of an engine, the individual cylinder contributions must be as even as possible, with as little variation as possible. Even small variations have a dramatic effect on the overall engine performance, so even mixture distribution is key, and impossible to achieve fully with a centralised mixture preparation system like a single carburettor shared between cylinders. In addition, the distance that the mixture travels in order to get to the cylinder has another effect - the possibility that the fuel and air may separate during transit. The fuel literally drops out of the moving air, becoming liquid droplets again (instead of a finely atomised spray). This is known as wall wetting, and causes flat spots due to instantaneously weak mixtures being introduced to the cylinder. The effect is much worse at low temperature (hence the need for the choke in days gone by, to enrich the mixture when cold) and during transient operation, where the air accelerates faster than the fuel (hence the need for an accelerator pump, to richen the mixture during accelerations). These were some of the arguments for the move to port fuel injection for each cylinder, thus contributing to improving efficiency and reducing emissions.

Back to combustion - fuel and air are mixed and compressed, and now we are ready to produce some work. In a gasoline engine, I am sure we all know that an electrical spark or arc is used to start combustion. We mentioned before that the mixture temperature is raised, but not beyond its ignition point. The intense electrical arc produced by the spark plug at its electrodes creates a localised heating of the mixture, sufficient for the fuel elements to begin oxidising, and combustion of the mixture starts with a concentric flame front growing outwards from the initial ignition kernel. Once this process is initiated, it perpetuates itself - there is more or less no control over it. We just have to hope that the mixture is prepared correctly to sustain this flame so that it consumes all of the mixture, burning it cleanly and completely. The technical term for this type of combustion is ‘pre-mixed’. The engine is ‘throttled’ to control the mass of mixture in the cylinder, and hence its power output - the throttle being a characteristic of the gasoline engine.

Fig 2 - Flame propagation in a gasoline engine via optical imaging system (source: AVL)

An important point to note is the speed at which this flame travels across the combustion chamber. The typical laminar flame speed through an air/fuel mixture (approximately 0.5 metres per second) would be far too slow in a combustion engine. So, we have to speed things up. The way this is done is via cylinder charge motion, or turbulence. The turbulence is generated by the induction and compression processes in conjunction with the combustion chamber design, and has the effect of breaking up the flame front, increasing its surface area, thus increasing the surface area of fuel mixture available for oxidation. Assuming a normal combustion event, the flame front grows out to the periphery of the cylinder, where it decays once all the mixture is burned.


Fig 3 - Charge motion speeds up the combustion process, in a gasoline engine this is generally known as 'tumble'
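To put some rough numbers on why this charge motion is needed, compare how long a purely laminar flame would take to cross the cylinder with the time actually available at speed. The figures below are illustrative only, not from any particular engine:

```python
# Rough, illustrative numbers - not from any particular engine.
bore_m          = 0.080     # 80 mm bore, spark plug roughly central
laminar_speed   = 0.5       # m/s, the laminar flame speed quoted above
engine_rpm      = 3000
burn_window_deg = 50        # combustion should be over in roughly this many deg of crank angle

flame_path = bore_m / 2                          # plug to cylinder wall
time_laminar = flame_path / laminar_speed        # time needed if the flame stayed laminar
deg_per_s = engine_rpm / 60 * 360
time_available = burn_window_deg / deg_per_s     # time the burn window actually lasts

print(f"laminar flame would need {time_laminar*1000:.0f} ms")                  # ~80 ms
print(f"time available at {engine_rpm} rpm: {time_available*1000:.1f} ms")     # ~2.8 ms
print(f"turbulence must speed burning up by roughly {time_laminar/time_available:.0f}x")
```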

Of course the timing of this event is essential! Ideally, we want the cylinder pressure, forcing down on the piston, to occur at the correct time relative to the crank angle. Seems obvious - too soon and we may be trying to push against the rising piston; too late and the piston is already moving down the bore, so the expansion of the gas won’t do any useful work and the energy will be wasted as excessive heat in the exhaust. A simple analogy would be to imagine pushing someone on a swing - too soon and the effect is a collision, too late and no force is transmitted - well, it’s the same in the engine cylinder. What engine engineers do know is that half of the total fuel energy should be released at around 8 to 10 degrees after top dead centre. This can be measured by cylinder pressure analysis during an engine test. Hence, with a new engine design, the appropriate ignition timing can be mapped by monitoring the cylinder pressure for energy release, as well as knock, in order to find the correct value for a given engine operating condition (a sketch of this heat-release calculation follows the summary list below). In summary then, the key points to consider regarding the spark ignition engine:
  • The fuel/air mixture is prepared externally, and ignited via a timed spark
  • The engine power is controlled via throttling, which reduces efficiency, particularly at part-load
  • The compression ratio is limited by self-ignition of the fuel/air mixture
  • In operation, engine maximum torque is limited by abnormal combustion (knocking)
  • Cylinder to cylinder variation (due to fuel distribution problems, and other factors) reduces the efficiency of the engine and is significant in a spark ignition engine
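Referring back to the '50 % of the energy released at around 8 to 10 degrees after TDC' rule of thumb, the sketch below shows, in simplified form, how a combustion analysis tool derives that figure from measured cylinder pressure, using the standard single-zone, first-law net heat release rate. It assumes a constant ratio of specific heats and ignores heat losses, which real analysis tools treat more carefully:

```python
import numpy as np

def mfb50_deg(theta_deg, p_pa, v_m3, gamma=1.32):
    """Crank angle at which 50 % of the (net) heat release has occurred.

    Uses the single-zone, first-law net heat release rate:
        dQ/dtheta = gamma/(gamma-1) * p * dV/dtheta + 1/(gamma-1) * V * dp/dtheta
    theta_deg, p_pa and v_m3 are crank-resolved arrays from a combustion
    analysis system; gamma is an assumed, constant ratio of specific heats.
    """
    dV = np.gradient(v_m3, theta_deg)
    dp = np.gradient(p_pa, theta_deg)
    dQ = gamma / (gamma - 1) * p_pa * dV + 1 / (gamma - 1) * v_m3 * dp
    Q = np.cumsum(np.clip(dQ, 0, None))        # crude: keep only the positive (release) terms
    mfb = Q / Q[-1]                            # mass fraction burned, 0..1
    return float(np.interp(0.5, mfb, theta_deg))

# Usage with real data: mfb50 = mfb50_deg(theta, pressure, volume)
# Ignition timing is then adjusted until mfb50 sits at around 8-10 deg ATDC (knock permitting).
```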

Friday, 26 July 2013

Technology focus - Future developments in On-Board Diagnostics

The latest generation of OBD is a very sophisticated and capable system for detecting emission-related problems with the engine and powertrain. But it still relies on the driver of the vehicle doing something about any problem that occurs.

With respect to this factor, OBD2/EOBD is no improvement over OBD1 - some enforcement capability is still needed. Currently under consideration are plans for OBD3, which would take OBD2 a step further by adding the possibility of remote data transfer. This would involve using remote transmitter/transponder technology similar to that which is already being used for automatic electronic toll collection systems. An OBD3-equipped vehicle would be able to report emissions problems directly back to a regulatory agency. The transmitter/transponder would communicate the vehicle VIN (Vehicle Identification Number) and any diagnostic codes that have been logged. The system could be set up to automatically report an emissions problem the instant the MIL comes on, or alternatively, the system could respond to a query from a regulator about its current emissions performance status.

What makes this approach so attractive is its efficiency: with remote monitoring via the on-board telemetry, the need for periodic inspections could be eliminated, because only those vehicles that reported problems would have to be tested. The regulatory authorities could focus their efforts on vehicles and owners who are actually causing a violation, rather than just random testing. It is clear that with a system like this, much more efficient use of available regulatory enforcement resources could be made, with a consequential improvement in the quality of our air.

An inevitable change that could come with OBD3 would be even closer scrutiny of vehicle emissions. The misfire detection algorithms currently required by OBD2 only watch for misfires during driving conditions that occur within the prescribed driving cycles; they do not monitor misfires during other engine operating modes, like full load, for example. More sophisticated methods of misfire detection will become commonplace, which can feed back other information to the ECU about the combustion process - for example, the maximum cylinder pressure, detonation events or cylinder work done/balancing. This adds another dimension to the engine control system, allowing greater efficiency and more power from any given engine design just via a more sophisticated ECU control strategy.

Future OBD systems will undoubtedly incorporate new developments in sensor technology. Currently the evaluation is done via sensors monitoring emissions indirectly. Clearly an improvement would be the ability to measure exhaust gas composition directly via on-board measurement systems (OBM). This is more in keeping with the emission regulation philosophy and would overcome the inherent weakness of current OBD systems: they fail to detect a number of minor faults that do not individually activate the MIL or cause excessive emissions, but whose combined effect is to cause the production of excess emissions.

The main barrier is the lack of availability of suitably durable and sensitive sensors for CO, NOx and HC. Some progress has been made with respect to this, and some vehicles are now being fitted with NOx sensors. Currently, however, there does appear to be a void between the laboratory-based sensors used in research and the reliable, mass-produced units that could form the basis of an OBM (On-Board Monitoring) system.





Fig 1 - NOx sensors are now in use! (Source: NGK)

Another development for future consideration is the further implementation of OBD for diesel engines. As diesel engine technology becomes more sophisticated, so does the requirement for OBD. In addition, emission legislation is driving more sophisticated requirements for after-treatment of exhaust gas. All of these subsystems are subject to checking via the OBD system and present their own specific challenges - for example, the monitoring of exhaust after-treatment systems (particulate filters and catalysts), in addition to more complex EGR and air management systems.





Fig 2 - Current monitoring requirements for diesel engines

Rate-based monitoring will be more significant for future systems, allowing in-use performance ratio information to be logged. It is a standardised method of measuring monitoring frequency and filters out the effect of short trips, infrequent journeys etc. as factors which could affect the OBD logging and reactions. It is an essential part of the evaluation where driving habits or patterns are not known, and it ensures that monitors run efficiently in use and detect faults in a timely and appropriate manner. It is defined as…

Minimum frequency = N/D

Where:
N = Number of times a monitor has run
D = Number of times vehicle has been operated
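In code form the ratio itself is trivial, but it is worth seeing how it is logged and compared against a minimum value. The 0.336 threshold below is only an example figure - the actual minimum ratios are monitor- and regulation-specific:

```python
# Illustrative in-use performance ratio calculation for one monitor.
def in_use_performance_ratio(times_monitor_completed: int,
                             qualifying_driving_events: int) -> float:
    """N/D as defined above; guard against a zero denominator."""
    if qualifying_driving_events == 0:
        return 0.0
    return times_monitor_completed / qualifying_driving_events

ratio = in_use_performance_ratio(times_monitor_completed=52,
                                 qualifying_driving_events=120)
# 0.336 is used here only as an example minimum threshold
print(f"IUPR = {ratio:.3f}", "OK" if ratio >= 0.336 else "below example minimum")
```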

A significant factor in the development of future systems will be the implementation of the latest technologies with respect to hardware and software development. Model-based development and calibration of systems will dramatically reduce the testing time by reducing the number of test iterations required. This technique is already quite common for developing engine-specific calibrations for ECUs during the engine development phase.

Virtual Development of OBD
Hardware-in-the-Loop (HiL) simulation plays a significant part in the rapid development of any ECU hardware and embedded system. New hardware can be tested and validated under a number of simulated conditions and its performance verified before it even goes near a prototype vehicle. The following tasks can be performed with this technology:

  • Full automation of testing for OBD functionality
  • Testing parameter extremes
  • Testing of experimental designs
  • Regression testing of new designs of software and hardware
  • Automatic documentation of results



Fig 3 - HiL environment for OBD testing

However, even in a HiL environment, a target platform is needed (i.e. a development ECU). These are normally expensive and, in a typical development environment, they are scarce. In line with the general industry trend to 'frontload', it is now possible to have a complete virtual ECU and environment for testing of ECU functions, including OBD, running on a normal PC with a real-time environment. The advantage is that no hardware is needed, but more importantly, simulation procedures (drive or test cycles) can be executed faster than real time - a 20-minute real-time test cycle can be executed in a few seconds - and this has a significant benefit in the rapid prototyping phase.

Sunday, 21 July 2013

Master your Multi Meter - A basic tutorial

A multi-meter is one of the most versatile pieces of kit in your workshop toolbox. The question is, though, how do you use it productively, and what can it tell you? Well, consider this blog a beginner's guide to finding your way round the most common features of a typical digital multi-meter. We’ll look at how to make some typical measurements and how to interpret the readings. So, read on and master your multi-meter.

Multi-meter basics – what is a multi-meter?
A simple enough question, but worth answering in a bit of detail. For electrical system measurement applications there are a number of different aspects of a system that you may want to measure or monitor. For example, to measure the voltage of an electrical system is often of interest as this effectively shows the electrical ‘pressure’ in the system that pushes the current around (to do the work – for example, heating a bulb filament to become white hot and emit light).
In addition, you may want to know the current itself, as current is effectively the amount of flow in the system – more flow, more work done. So the pressure and flow are linked, more pressure, more flow, more work done. So voltage as well as current measurement is often important.
But you can probably appreciate that a flow meter is a completely different measurement device, with a different measuring principle, than a pressure gauge – and it’s the same with electrical measurements – a voltmeter is a different type of meter to an ammeter, or indeed an ohmmeter (measuring resistance to current flow in a part of the circuit). So you need different instruments for each – and that is the beauty of a multi-meter! It’s a single meter that is capable of measuring more than one thing in an electrical circuit – it is effectively more than a single meter, it’s several devices in one (hence the name ‘multi’). Typically, multi-meters will always be able to measure amps, volts and ohms (resistance), but often they incorporate other features that allow the user to do much more analysis and measurement of a circuit.
Originally, multi-meters were analogue meters, with a needle and dial (like a speedometer). However, these have been almost universally superseded by the digital multi-meter, also known as a DMM. Note that analogue meters are often known as AVOs or AVO meters (Amps-Volts-Ohms). DMMs are much more robust than an analogue meter as they don’t need a sensitive needle/dial; however, some users prefer to look at a dial as it is easier to process visually and to detect trends (that’s why digital speedometers haven’t really caught on).


Fig 1 - Analogue and Digital type multi-meters (Draper)

What can you measure with a multi-meter
Let’s concentrate on the basics – how do you connect a meter to the circuit to measure volts (pressure), amps (flow) and resistance (resistance to flow). As we mentioned, the measurement principle is different for each, and so is the way that the meter is connected in the circuit. Let us study each case:

Volts
To measure the voltage, you need to connect the meter ‘across’ the component or circuit section of interest, effectively in parallel. The meter then ‘sees’ the same pressure as the component and can measure and display the value. Note that the volt meter has a very high resistance; this ensures that no additional load is applied to the circuit by the meter, and hence the circuit itself is not disturbed by the meter whilst performing a reading. Take a look at the circuit diagram:

Fig 2 - Connection of the DMM for voltage measurement 

Note that most vehicle circuits will be earth return, hence one side of the voltmeter may often be connected to a convenient earth point when taking a reading. Another common, but perhaps underused, technique when measuring voltage - particularly useful on vehicle circuits where voltage is low but current is high - is to measure the voltage ‘drop’ across part of the circuit. This is especially helpful if a high resistance is suspected to be causing a problem, for example across a switch or connector. Measuring the voltage drop highlights a resistance in a working, loaded circuit and can easily show up bad connections. The diagram below shows the meter connection – as a typical rule of thumb, the voltage drop should be no more than 10% on any part of the wiring circuit to the component.

Fig 3 – Connection of the DMM for voltage drop measurement
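As a quick check against that 10% rule of thumb, the small sketch below (illustrative figures only) turns a measured drop into a percentage of the supply voltage:

```python
def voltage_drop_percent(supply_v: float, measured_drop_v: float) -> float:
    """Voltage lost across one part of the circuit, as a percentage of the supply."""
    return measured_drop_v / supply_v * 100

# e.g. 1.5 V measured across a corroded earth strap on a 12.6 V system
drop = voltage_drop_percent(12.6, 1.5)
print(f"{drop:.1f} % drop", "- investigate!" if drop > 10 else "- acceptable")
```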

Amps
To measure current, the meter has to be connected into the circuit – in series. The meter is connected in circuit so it can measure the flow around the circuit when in operation. When using a multi-meter to measure current, the resistance of the meter circuit is very low. This prevents the meter from creating an additional circuit resistance and affecting the accuracy of the reading. The diagram below shows how the meter is connected:

Fig 4 – DMM connected for current measurement

The important thing to remember is that the meter itself will have a limit to the amount of current it can measure - most DMMs are limited to 10 amps maximum. So, you must be careful not to overload the meter circuit, which is often protected by a fuse inside the meter.

Resistance
Apart from voltage and current, it’s often useful to be able to measure the resistance of a circuit or a component – that is, the restriction that the circuit/component provides to the flow of current. According to Ohm's law, the resistance (in ohms) multiplied by the current (in amps) equals volts. So by knowing the current through and voltage across a component, you can calculate its resistance (resistance equals volts divided by amps). However, it’s not always convenient to measure both voltage and current in a circuit (as you need two meters); also, you may want to establish resistance without powering up the circuit (for example, measuring a component out of the circuit).
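As a tiny worked example of that rearrangement of Ohm's law (the figures are illustrative):

```python
def resistance_ohms(volts: float, amps: float) -> float:
    """Ohm's law rearranged: R = V / I."""
    return volts / amps

# e.g. a bulb showing 12.0 V across it while drawing 2.0 A
print(resistance_ohms(12.0, 2.0), "ohms")   # 6.0 ohms (hot filament resistance)
```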

For this application, you can use an ohmmeter (ohms are the unit of resistance). An ohmmeter measures resistance by passing a small current through the component/circuit and establishing the resistance from the measured voltage and current. The current is tiny, supplied by a dry cell battery inside the meter, so there is no danger of damage to the unit under test; also, the component is removed from the circuit completely for the measurement, so the circuit does not need to be activated. The circuit for measuring resistance is shown below, and always involves complete removal of the component, or isolation of the circuit, in order to make a measurement.

Fig 5 – DMM connected across component for resistance measurement

Note that it often makes sense to check or calibrate the meter before the resistance measurement is made. To do this, you connect the test leads of the meter together and check that the reading is zero ohms. Also, note that the component or circuit must be completely isolated or you will get false readings.

What else can you measure?
Now we have covered the basic measurements, it’s worth noting that most multi-meters have additional measurement modes or features, some of which can be quite useful and are worth understanding. Typical extra measurements that you may see, depending on the meter are:

Continuity test:
Very similar to a resistance test, often incorporating an audible signal to give a ‘go’ – ‘no go’ indication. Very useful for testing bulbs, fuses, switches etc. – components that are generally either open or closed circuit – this mode gives a quick indication if OK or not. The meter generally gives an audible signal (from a buzzer) if the resistance is below a certain value. Can also be used on wiring and connectors to detect open circuits, as long as the circuit is not live, and is isolated.

Diode test:
Another type of resistance check but specifically for diodes (which are an electrical one way valve). They are semiconductor junctions and need a minimum voltage applied across them before they will ‘switch on’ and conduct. Most multi-meters will not provide this minimum voltage in resistance test mode, as they tend to use very low voltages to prevent circuit or component damage. In diode test mode, a small current is supplied by the meter. This is used to test the diode in forward and reverse direction (by reversing the lead connections manually). For a healthy diode, one way should conduct, the other should block. When the diode conducts, the voltage drop across the diode is shown on the display, generally about 0.5 -0.7 volts depending on the diode type. If the diode is blocking, no current flows and the display shows zero. 

In addition, special features are often added to multi-meters aimed at specific applications, for example, Electronics laboratory test meters may include:

Temperature
An additional connector to allow temperature measurement via a thermocouple that is generally supplied with the meter. The tip of this is the sensing element and the temperature is displayed on the meter display – note that thermocouples are not incredibly accurate, at approx. +/- 1 degree centigrade.

Transistor Test
This is an electronic component test feature, similar to diode test, but to test the gain factor of a transistor (i.e. the amplification ability of the transistor, also known as hfe). Unless you’re an electronics engineer or technician, you won’t need this.

Capacitor Test
As above, specifically for testing the capacitance value of a capacitor – again, unless you are into electronics, you won’t need this!

Multi-meters aimed at Automotive Diagnostics are also popular, these may include the following features:

Electronics Tacho
This mode allows measurement of engine speed via an inductive clamp generally supplied with the meter. The clamp picks up ignition pulses from the HT lead, the meter calculates the time difference between the pulses and converts this to engine speed. Sometimes the meter can be adjusted for 4 stroke or 2 stroke engines to give the correct reading, otherwise the reading has to be halved for a 2-stroke or wasted spark ignition system.

Dwell/Pulse width
This mode allows pulse evaluation, to be able to understand the width of a pulse, or the duty factor of a pulse (i.e. how long the pulse is active within a switching cycle). Typical automotive applications would be – points dwell measurement (how long the points are closed for - normally given as an angle in degrees or %); injector opening time (in milliseconds); idle speed control valve (% on/off of the driver circuit), there are also others…


Practical Measuring Tips
When using a multi-meter, bear in mind the following:
  • Note that most multi-meters will measure AC (Alternating Current) and DC (Direct Current). For vehicle applications, you will be measuring DC almost exclusively! 
  • Make sure that when you are measuring that the meter is set correctly before you connect to the test point. Select the correct mode (AC volts, DC volts etc.). 
  • Some meters are ‘auto ranging’ so they are able to automatically detect the correct range according to the input. However, with some meters you have to select the range manually – be careful when doing this, if you are not sure what range you need, start at the highest and work your way down the scale!
  • Make sure that you connect the test leads correctly for the mode you are in, there are normally several jack sockets, different ones for current and voltage. If you don’t get this right you will get no reading (best case) or worst case, you will damage the meter!
  • Most digital multi-meters will read a maximum of 10 amps current, and most are fitted with an internal fuse so that if they are overloaded the fuse blows before any damage occurs to the cables or the meter.

Typical Measurements
Let’s look at a typical measurement application - measuring voltage. Voltage is a measure of the system ‘pressure’, and you need this to push the current around. No volts, no current flow! To measure voltage, simply connect the leads to the appropriate jack sockets.


Picture1 – Connecting leads into jack sockets

Use the dial to select the correct measurement range; for a 12-volt system the reading will be between 10 and 20 volts, so for this meter we can select the 30-volt range, or, if the meter is capable of auto-ranging, just select DC volts.


Picture 2 – Selecting correct range on the meter

Now connect the leads: for negative earth cars, black lead to a good earth, red lead to the test point (for positive earth cars, the other way round).


Picture 3 – Connecting the leads

You’re now connected, so you can observe the reading - the display will show the voltage potential at the test point. The test point should show a voltage reading similar to battery volts.


Picture 4  – Meter reading battery volts from light switch supply

Before starting any voltage measurements, connect your meter across the vehicle battery; this shows that the meter is working, and that the battery is not flat! Note that in the voltage range setting the resistance of the meter is ‘high’, so that it doesn't affect the circuit being tested by becoming a significant additional current path.

Summary
A digital multi-meter is a useful piece of kit, particularly one with the useful extras for automotive use. However, you don’t need to spend a fortune, all meters will read volts, ohms and amps and those are the basic functions you need for electrical system fault finding. The most important things to look for are a good quality, durable unit with a protective case or holder that will stand up to the working environment. Long leads are essential for use around the vehicle (look for leads of approx. 1 metre), in addition to a large clear display (display back lighting is also useful).

Sunday, 14 July 2013

Automotive Scope Basics - setting the time base correctly

Setting the correct sampling rate when using a scope is a decisive factor in getting a quality measurement. It enables you to get the information that you need from the measured signal, to ensure you are able to make an informed diagnostic judgement! This factor is particularly important with respect to the signal input channel and the target measurable. Let’s look closely at how you can get this right, and avoid under or over sampling.

The first thing to understand is the reasoning behind choosing an appropriate sample rate. Of course, you could sample every channel as fast as possible! But you would end up with large data files, difficult to handle, with extra information (and possibly noise) on the signal that you don’t need. The opposite is also true: if you don’t set the sample rate high enough, you will miss crucial components of the signal – this is known as aliasing! One of the first things to appreciate is the relationship between sampling time (the distance between samples) and frequency - it is easy!

Frequency = 1/sample time

e.g. sample time is 10 milliseconds => 1/0.01 => 100 Hz

So, by measuring the cycle time, you can calculate the frequency, and vice versa – but what does this mean and how does this help? In signal processing, there are sampling theorems (try Googling Nyquist and Shannon) which state that the sampling frequency must be at least twice that of the highest frequency signal component of interest. This means that, if you can measure and establish the highest fundamental frequency, then you can set the sample rate accordingly. There are some slight complications though: generally, when we are using a scope to measure automotive signals, the signals are transient in nature, normally related to engine speed. So, it is important to consider what the signal frequency will be at the highest engine speed that may be reached during a measurement – let’s look at an example. The diagram (Fig 1) shows a typical inductive CPS signal (Crank Position Sensor), in this case with 2 ‘gaps’ per revolution as reference points for the engine management system.

Fig 1 – CPS raw signal, 2 positions per revolution where there are missing teeth as reference points

In this case the cursors are measuring the time difference for 1 engine rev (about 30 milliseconds) which converts to 2000rpm (which is correct).

e.g. 1/0.03 => ~33.33 Hz - then multiply by 60 to convert from revolutions per second to revolutions per minute...

 => 33.33 x 60 => approx. 2000 rpm

If we look in a bit more detail at the signal, we can examine the highest frequency part (Fig 2)



Fig 2 – CPS signal, zoomed in, cursors measuring the time difference of the high frequency part

In this case, the display shows the frequency directly in the bottom right-hand corner (I am using a Picoscope). So based on this (~4 kHz), I know I have to set the sample rate to at least 8 kHz to ensure that I don’t miss anything on the signal. As mentioned earlier, this measurement is at an engine speed of 2000 rpm, so I need to consider an upper limit. In this case, it’s a gasoline engine and I am not likely to exceed 6000 rpm in my measurement task, so I can set the sample rate accordingly at 3 x 8 kHz or slightly greater (according to the time base setting steps you have on your measurement device). Now I know that I will sample with good digital resolution and conversion quality, without oversampling, right throughout my task. Let’s take a look at what happens when you under- or over-sample. The picture below (Fig 3) shows the effect of undersampling. In this screenshot it is not too extreme - the signal is sampled at half the value required by the sampling theorem, which means it is being sampled at its actual frequency – so you can see the basic shape of the wave, but you can also see the loss of detail compared to the correctly sampled signal.
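The sample-rate reasoning above can be wrapped into a small helper - a sketch that simply assumes the figures quoted in this example (a ~4 kHz tooth frequency measured at 2000 rpm, and a 6000 rpm ceiling):

```python
def required_sample_rate(signal_hz_at_ref, ref_rpm, max_rpm, nyquist_factor=2):
    """Scale the highest frequency seen at a reference engine speed up to the
    maximum speed expected in the test, then apply the Nyquist factor."""
    highest_hz = signal_hz_at_ref * max_rpm / ref_rpm
    return highest_hz * nyquist_factor

# CPS tooth frequency measured as ~4 kHz at 2000 rpm; the test may reach 6000 rpm:
rate = required_sample_rate(4_000, ref_rpm=2_000, max_rpm=6_000)
print(f"set the time base for at least {rate/1000:.0f} kS/s")   # 24 kS/s minimum
```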

Fig 3 – Undersampling (brown trace at 8kHz – the minimum required, blue trace at 4 kHz)

In Fig 4 below, we can see the effect of oversampling on the signal - basically a lot of noise can be seen, which in this case adds no value to the evaluation and could be misleading. (In contradiction to this, it’s worth noting that sometimes noise can be the root source of a problem, so it may occasionally be necessary to sample at high frequency to capture it.)


Fig 4 – Oversampled signal (brown trace at 8kHz, blue trace at 1MHz)

What is not shown is the effect on the size of the data files – large files are difficult to handle and store! Fig 5 shows the relative file size at different sample rates for the data shown in the screenshots - file size grows dramatically as the sample rate increases! For measurements over extended periods, at high resolution, the files will be very large!

Fig 5 – Data file size compared to sample rate

Experimenting a bit with this signal showed that a 100 kHz sample rate was a good compromise! Generally, sampling theorems suggest a minimum of twice the highest frequency component; in my experience, any factor between 2 and 10 is fine, depending upon the application and the task. Fig 6 shows the signal at 100 kHz. This is a sampling factor of 25 at 2000 rpm and 8.33 at 6000 rpm – which is fine for this CPS signal – the signal will be appropriately sampled, even at the maximum engine speed.


Fig 6 – Sampling at 100kHz, a good compromise in this case

Summary
Establishing the correct basic sampling frequency for a signal is good practice - it optimises the trade-off between good quality measurement data and manageable files to work with. Most importantly though, it means that you have a good understanding of what you are doing, and that you know how to configure and use your scope effectively! It is well worth practicing this aspect of the set-up - measure some signals with deliberate over- and under-sampling, then compare them so you can see the difference in data quality, for a known signal type, when the scope's sampling is correctly set and when it is not!





Thursday, 4 July 2013

Oscilloscopes for Diagnostics - set-up fundamentals

Oscilloscopes were once the reserve of scientists and physicists, measuring complex signals in research labs, pushing the boundaries in the quest to extend human knowledge! But, as we all know, the cost of sophisticated technology reduces over time, such that things that were once horrendously expensive are now practically throw-away!

The consequence of this is that oscilloscopes are now relatively affordable as a general measuring device, and are particularly useful for electrical system diagnostics - they're almost down to the price of a really good multimeter. But just because they are within the grasp of the average 'technician in the street', does that mean it's worth digging deep to get your mitts on one? And even if you did, would you be able to get any real benefit out of it?

The basics:
So, you've probably heard the term oscilloscope, but what are we actually talking about? Well, fundamentally, the 'scope' (as it's very commonly known for short) is a voltmeter; however, the voltage displayed on a voltmeter is given as a reading, either via a needle on a scale, or as a discrete value. The reading given by a scope is a curve, drawn on a display screen - so what? If the signal is displayed in this way, it means that fast-moving changes can be displayed. Consider a signal in the form of a single voltage pulse, like a spike! If you looked at this signal on a digital multimeter, it probably wouldn't even register; looking via an analogue meter, you may, if you're lucky, see the needle twitch. However, look at the signal on a properly set-up scope and you will see much more of the detail - you'll see the profile of the signal shape, the rising and falling edges, the peak value, plus the duration of the pulse - much, much more detail that you would miss with a meter. Of course, for some signals you don't need all that detail - generally, signals that are changing slowly over time, for example a temperature signal, or a pressure sensor signal. These are fine viewed with a meter. However, signals which are dynamic in nature, in particular those related to crank position, or fast-changing sensor values, make much more sense when viewed on a scope. To give a specific example, think of a crank position sensor signal. View this signal on a meter, and you'll just see a constant or slightly wavering voltage; measure it with a scope, and you will see all the details of the edges relating to crank position, and the missing tooth relating to a TDC reference mark. A signal such as this needs a scope measurement to really be able to check the signal quality during operation - using a meter, you would be hard pressed to diagnose any run-time faults.

Scopes - Analogue, then Digital
Originally, scopes were used in labs for analysing signal waveforms. They were analogue devices, similar in many ways to the old-fashioned television, complete with a cathode ray tube. The basic operation was that an electron beam was fired at a phosphor-coated screen, drawing a straight line across the screen at regular intervals according to the scope set-up. This trace persisted briefly on the screen until it was redrawn - the process being cyclic, with the screen acting as a kind of very short-term memory. The beam would be deflected by the applied signal to be measured; so, imagine a pulse - this would deflect the beam, and the line drawn on the screen would show the pulse pattern over time as the line swept across the screen. The disadvantage of this type of device is that it is difficult to store or capture a waveform - the only real method is to literally photograph the screen image - not ideal.

Fig 1 - A typical analogue lab scope - at one time, state of the art kit!

The digital age overcame this by sampling a waveform and storing it digitally. With this technology, measurements can be stored, and further processing and analysis can be carried out by a signal processor, either during or after the measurement. Waveforms can be archived as files on a hard disc drive, and these digitally captured traces can be analysed in greater detail after the measurement. Digital scopes are now standard, and have the sampling speed and performance to match analogue scopes for all but a few very specific applications. Certainly, for automotive use, the digital scope is standard, so, no further discussion on scope history from this point on! It is important though to understand how a digital scope captures the signal, as this creates a few points that you'll need to consider when setting up a scope for a measurement task - let's take a look at this in more detail...

Fig 2 - A typical 'scope' kit for Automotive Diagnostics (source: Pico Technology)

Scope settings for accurate sampling:
A digital sampling device (in our case a scope, but there are others in many different applications) uses something called an analogue-to-digital converter (often abbreviated to ADC) to convert the target waveform or signal into something that can be stored or manipulated by a digital signal processor or computer. The ADC samples the applied input signal at fast, regular intervals; each sample creates a measurement value, a number, that can be resolved into a binary value (1s and 0s) and then stored in electronic memory. In this way, we end up with a string of numbers in memory which represent the waveform. However, there is a critical consideration here! In between sample points, the signal has not been measured, so between two samples we simply draw a straight line (known as interpolation), and we have to be certain that we sample enough points to capture the full signal detail. If you don't sample quickly enough, some detail in between could be missed completely (a phenomenon known as aliasing), so an important consideration when setting up the scope is sampling frequency - are you sampling fast enough to capture the detail of the signal? Of course, you could just sample every signal as fast as your device will allow, but oversampling uses up valuable memory space and adds no value, because measurement files will be unnecessarily large to manipulate and store. So, this part of the set-up is a compromise that you will need to consider and get right!


Fig 3 - It is important to sample fast enough to capture the high frequency signal components

In addition, you need to think about the vertical resolution - the scope has a certain input range (say -10 to +10 volts). Within this range, there are a certain number of 'bits' (or input steps) available for the digitisation process; effectively this defines the minimum signal change that the scope can record between samples on the vertical axis. This is important for a similar reason to the correct sample rate - you need to use as many of the available bits as possible to digitise your signal, otherwise the conversion will be poor and detail will be lost. The signal will appear 'blocky', with steps instead of a nice smooth curve during dynamic changes of the signal.

Fig 4 - Quantisation error due to insufficient dynamic range of vertical axis
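To see why matching the input range to the signal matters, here is the simple arithmetic behind the step size, assuming (purely as an example) a 12-bit converter:

```python
def quantisation_step_volts(range_min_v: float, range_max_v: float, bits: int) -> float:
    """Smallest voltage change the scope can resolve on a given input range."""
    return (range_max_v - range_min_v) / (2 ** bits)

# e.g. a 12-bit scope input on a +/-10 V range vs the same signal on +/-1 V
print(f"{quantisation_step_volts(-10, 10, 12)*1000:.1f} mV per step")  # ~4.9 mV
print(f"{quantisation_step_volts(-1, 1, 12)*1000:.2f} mV per step")    # ~0.49 mV
```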

Summary
Once you have got your sampling and input range settings correct, relative to the signal you are measuring, then you should be able to enjoy measuring good quality data, which is excellent information to assist with many diagnostic procedures. Remember to always have a good ground connection/reference on the input channel to avoid signal noise and cross-talk. Make sure that you use the correct type of probe for the signal that you want to capture, and also make sure that you zero/calibrate the input channel before any real measurement - just to be sure that you'll capture what you want, with the correct amount of detail. A useful tip is to make sure that your scope is always on hand, primed and ready for use - not tucked away somewhere where it is an effort to use it. Then it is easy to use it to measure and store signals from known good components or systems. In this way, you can build up your own reference library of 'good' data that you can use in diagnostic procedures. Over time you'll build up a 'big data' set of information that you can use for comparison when you are looking to locate a real fault - this can be an invaluable time saver!