P14254: Underwater Thermoelectric Power Generation

Engineering Analysis

See Detailed Drawings for the design.

Heat Sink

The following link explains the calculations used to determine the heat sinking requirements:

Heat Sinking Calculation Summary

Heat Spreader

ANSYS was used to determine whether copper or aluminum should be used to spread heat from the heat source to the thermoelectrics. Thermoelectrics operate best with a uniform temperature across their faces. Copper was chosen because it produces a minimal temperature gradient (0.85 degrees Celsius) across the top of the thermoelectrics and is readily available in the required size in the Thermoelectrics Lab.

Heat Spreader Analysis Walkthrough


We need to supply 563W to the thermoelectrics, so a 750W cartridge heater will be used in concert with a variac and power analyzer to provide the desired power.
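As a quick sanity check, the variac setting needed to dial the 750W heater down to 563W can be estimated from the heater's resistance. The sketch below assumes a 120V heater voltage rating, which is an assumption; the actual nameplate voltage should be substituted.

```python
import math

P_RATED = 750.0   # cartridge heater power rating, W
V_RATED = 120.0   # assumed heater rating voltage, V (check the nameplate)
P_TARGET = 563.0  # desired heat input, W

R = V_RATED ** 2 / P_RATED         # heater resistance, ohms
V_SET = math.sqrt(P_TARGET * R)    # variac voltage for the target power
print(f"Variac setting: {V_SET:.1f} V")
```

The power analyzer then confirms the actual delivered power, since the heater resistance drifts with temperature.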


The following documents explain the process of choosing our insulation:

Preliminary Insulation Calculations

Insulation Options

Final Primary Insulation Options

A thermal circuit was developed to predict the performance of the system with the insulation that we selected. /public/Design/HeatSink/ThermalCircuit.xlsx was used to calculate expected heat flows.
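As an illustration of the thermal-circuit approach, a series network of thermal resistances reduces to Q = dT / sum(R). The temperatures and resistances below are placeholders only; the actual values are computed in ThermalCircuit.xlsx.

```python
# Series thermal circuit: Q = dT / sum(R_th).
T_SOURCE = 150.0   # hypothetical heater-block temperature, deg C
T_WATER = 20.0     # tank water temperature, deg C

# Hypothetical thermal resistances, deg C/W (real values are in
# ThermalCircuit.xlsx).
R_SPREADER = 0.02
R_MODULES = 0.20
R_HEATSINK = 0.05

R_TOTAL = R_SPREADER + R_MODULES + R_HEATSINK
Q = (T_SOURCE - T_WATER) / R_TOTAL
print(f"Predicted heat flow: {Q:.0f} W")
```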


The clamping spreadsheet used for calculations can be found here:

Clamping Spreadsheet

The document explaining the spreadsheet can be found here:

Clamping Documentation


A worksheet (XLSM) was created to analyze thermoelectric modules using a standard thermal model. The analysis is explained here. The analysis had three parts: first, fit manufacturer data to the model; second, analyze the performance of the thermoelectric under the constraints of the system design; and finally, determine the optimal number and kind of thermoelectrics to use.


The analysis was confined to modules from Thermonamic Ltd. because most manufacturers did not provide sufficient data to satisfy the thermoelectric model used.

The analysis was further limited to modules which were 40mm on a side because the Sustainable Energy Lab has equipment to test the performance of modules that size, and in fact possesses several modules from Thermonamic Ltd.

The worksheet mentioned earlier indicated that an optimal solution would be to use two Thermonamic TEHP1-1264-0.8 modules with 450W of input heat to generate 18W of power (4% conversion efficiency). Although 18W is less than our initial customer specification, the efficiency is on par with the specification because the input heat has been reduced from 500W to 450W. There are two drawbacks to this approach. First, the electrical system would be required to always draw a minimum average current from the thermoelectrics to keep them from overheating. Second, we do not know whether the manufacturer's specifications are accurate; if the performance of the module is significantly worse than expected, we will not meet our engineering requirement for power output.

To solve both of these problems, we can use three modules and increase the heat rate. Three TEHP1-1264-0.8 modules with 563W of input heat will produce 19.8W of power, and if the TEHP1-1264-0.8 modules do not perform satisfactorily, we can fall back on three TEP1-1264-1.5 modules, which the Sustainable Energy Lab already owns and whose properties are well known.
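The efficiency figures for the two options are easy to verify; this small sketch just recomputes output power over input heat for both module counts.

```python
def conversion_efficiency(p_out_w, q_in_w):
    """Electrical output power divided by thermal input power."""
    return p_out_w / q_in_w

# Two-module option: 18 W out for 450 W in.
# Three-module option: 19.8 W out for 563 W in.
print(f"{conversion_efficiency(18.0, 450.0):.1%}")
print(f"{conversion_efficiency(19.8, 563.0):.1%}")
```

The three-module option trades a little efficiency (about 3.5% versus 4%) for more output power and a safety margin against optimistic manufacturer data.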


For final testing of the generator, we need a way to hold it in a fixed position. Various concepts were considered, including adding hooks to the heat sink and enclosure so the generator could hang in the test tank, and adding legs so it could support itself. Weighing our options, we decided it would be best to add legs.

The center of mass of the generator was found, and a set of legs (square 6061 aluminum tubing) was designed to support it.


To waterproof the tether which connects the mechanical system to the electronics and data acquisition equipment, we had three options:
  1. Waterproof connectors
  2. Cable glands
  3. Conduit

Waterproof connectors were at first considered the best option, but connectors that work with thermocouple wire are prohibitively expensive. Cable glands are attractive because of their simplicity, but they would make it difficult to change the wiring configuration during the testing phase if an issue crops up, so we decided to use conduit.

The conduit we are using is flexible PVC with a compression fitting that mounts through the wall of the enclosure. The setup is very similar to plastic pipe but cheaper, because the fitting costs less. If the fitting is not completely watertight (which we will determine by testing), we should be able to use a sealant or glue to waterproof it.


Electrical System - Block Diagram

The following links estimate the overall predicted electrical system efficiency. The analysis was performed for multiple designs, including designs using organic and ceramic capacitors, and finally for a design with an increased switching frequency.

Efficiency Plots

These plots show that, overall, the inductors within the converter dissipate the most power. The graph farthest to the right shows that in the latest design revision, with increased switching frequency, the required inductance was greatly reduced, resulting in the most efficient design.

The values and calculations used to create the figure above are given in the document below. Current and voltage values used in the calculation were extracted from PSPICE simulation or derived from the maximum calculated current on the input and output sides of the converter. For non-passive components, the datasheet of each device was used to extract the nominal current draw rated at a 5V supply voltage.

Calculation of the Electrical System Efficiency: Power Dissipation Calculation

DC-DC Converter

The main component of any maximum power point tracker is the DC-DC converter. By changing the operating point of the converter using some control scheme, the maximum power point of the system can be tracked. For the best overall efficiency of the electrical system, the inverting buck-boost converter was initially selected because it uses the fewest components of the converters considered.

Inverting Buck Boost

public/Design/Electronics/DC-DC Converter/BuckBoost/BuckBoostSchematic.jpg

The drawbacks to this design are the large capacitance needed to filter the output voltage and the inverting output, which complicates the control signals.

Based on these drawbacks, the ZETA converter was selected instead. The ZETA converter uses two additional inductors and capacitors, but the individual device sizes, and therefore the parasitics, are much lower. Also, the output voltage is positive with respect to the input, making the control circuitry easier to reference.

ZETA Converter

public/Design/Electronics/DC-DC Converter/ZETA/ZetaSchematic.jpg

After selecting the ZETA converter, a reference design (TI ZETA Design) was used to guide hand calculations. These calculations and analysis were performed using MATLAB in the following document.

MATLAB Component Selection and Frequency Analysis
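One of the first-order hand calculations is the nominal duty cycle. For an ideal ZETA converter in continuous conduction, Vout/Vin = D/(1-D), so D = Vout/(Vin + Vout). The voltages below are illustrative assumptions, not the project's actual operating points, which are in the MATLAB document.

```python
# Ideal ZETA transfer ratio in continuous conduction: Vout/Vin = D/(1-D),
# which rearranges to D = Vout / (Vin + Vout).
V_IN = 3.0    # assumed TEG-side voltage, V
V_OUT = 5.0   # assumed battery-side voltage, V

D = V_OUT / (V_IN + V_OUT)
print(f"Nominal duty cycle: {D:.3f}")
```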

After calculating these first-order values, a more detailed analysis was performed using OrCAD PSPICE. The plots of pertinent data are shown below. Here, the constraining design parameter was the output ripple voltage. Protection circuitry within lithium-ion batteries typically cuts off charging within 25mV of the maximum cell voltage, so we treated this cutoff as our maximum allowable ripple voltage. The component sizes in the more accurate PSPICE model were adjusted until this ripple value was reached, from which the minimum capacitor size could be determined.
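For intuition on how the 25mV limit drives capacitor size, a first-order buck-like estimate for the ZETA output stage is dV = dI_L / (8 * f_sw * C). The ripple current and switching frequency below are assumptions for illustration; the PSPICE model gives the definitive sizing.

```python
# First-order output-capacitor estimate for the ZETA converter's
# buck-like output stage: dV ~= dI_L / (8 * f_sw * C).
DV_MAX = 0.025   # allowable output ripple, V (Li-ion 25 mV cutoff margin)
DI_L = 0.3       # assumed peak-to-peak inductor ripple current, A
F_SW = 100e3     # assumed switching frequency, Hz

C_MIN = DI_L / (8 * F_SW * DV_MAX)
print(f"Minimum output capacitance: {C_MIN * 1e6:.1f} uF")
```

This also shows why raising the switching frequency shrinks the required capacitance (and inductance) proportionally.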

After the optimal capacitor size had been determined, an analysis of the inductor size was performed and is covered in the following document:

Inductor Size Selection

With these parameters and the nominal switching frequency of the controller selected, the DC-DC converter was fully designed.


One of our most important customer requirements was to harvest energy from the thermoelectrics as efficiently as possible; another was to charge a battery. Combining these two requirements is not trivial. Lithium-ion batteries are typically charged using a standard two-phase cycle: a constant-current phase followed by a constant-voltage phase. Because only the current and then the voltage are held constant, the power drawn by this charging scheme is not constant and tends to drop exponentially during the constant-voltage phase. We therefore wanted to explore the effects of charging a lithium-ion battery with a constant input power.

A first-order estimate of charging at non-constant current is shown in the following plot:

public/Design/Electronics/Battery Simulation/ChargingMethodsAnalysis.jpg

The top two plots show a fast-charging, constant-current/constant-voltage charging scheme. These plots were created using an approximation of real charging data. The bottom plot estimates the charging behavior under a non-constant-current charging scheme. As seen in the plot details, simply changing the charging scheme can greatly decrease the power left unused during a charge cycle.
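The shape of the CC-CV current profile can be sketched with a toy model: constant current until the cell voltage limit is reached, then an exponentially decaying current during the CV phase. All constants below are assumptions for illustration, not real cell data.

```python
import math

# Toy CC-CV charge profile (not real cell data).
I_CC = 2.0       # assumed constant-current setpoint, A
TAU = 1200.0     # assumed CV-phase decay time constant, s
T_CV = 1800.0    # assumed time at which the CV phase begins, s

def charge_current(t_s):
    """Charging current (A) at t_s seconds into the cycle."""
    if t_s < T_CV:
        return I_CC                              # CC phase
    return I_CC * math.exp(-(t_s - T_CV) / TAU)  # CV-phase decay

for t in (0.0, 1800.0, 3000.0):
    print(f"t = {t:6.0f} s  I = {charge_current(t):.2f} A")
```

The decaying tail is where the delivered power drops well below the power the thermoelectrics can supply, which motivates the constant-power investigation.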

The option of using supercapacitors to supplement the power draw on the TEG and to increase energy and power capacity was considered, but the idea was abandoned due to safety concerns and uncertainty about how the supercapacitors would interact with a battery in parallel.

We researched extensively why CC-CV is the most widely used charging method compared with other charge algorithms, and concluded there are three reasons: the exponentially decaying current during the CV phase minimizes the risk of overcharging the battery; it reduces wear on the battery cells; and it does not require a complex implementation.

To predict a constant-power charging method, we followed the lithium-ion battery model detailed in "Accurate Electrical Battery Model Capable of Predicting Runtime and I–V Performance" by Min Chen and Gabriel Rincon-Mora. We then built our own model in Simulink and simulated a constant-power charging cycle.

Lithium Ion Battery Circuit Model

In "Battery Management Solutions, MaxLife™ Technology: Extending Battery Service Life and Minimizing Charge Time" by Yevgen Barsukov and Michael Vega, we examined the chemical effects of battery charging. From this we determined that constant-power charging will cause excessive wear on the battery. One way to mitigate this is a battery bank with a large capacity, which decreases the rate of charge; in this project, however, a large-capacity battery bank would lead to unreasonably long test times.

The Simulink model files are given below:

Lithium Ion Battery Model Rev. 3

In this revision, the lithium ion battery itself is modeled. The charging current must be defined by the user.

Max Power Charging Cycle Rev 2

This Simulink model yields the battery voltage and current based on a constant-power charging cycle, with the available constant power as the user input.

The resulting plots are shown here:

public/Design/Electronics/Battery Simulation/Charge Cycle.jpg

From the model, the single-cell prediction was extrapolated to a "3.4" parallel-cell battery, predicting a charge time of approximately 30 minutes.

Controller and Control System

To track the maximum power point of the thermoelectric system, a DC-DC converter is controlled using an algorithm. For its fast response time, the typical "Perturb and Observe" method was chosen. This involves measuring the output power and varying the duty cycle of the converter in the direction of increasing output power.

To measure the output power, a current sensing method was required. The current sensing analysis is shown in the following MATLAB publication:

Current Sense Analysis

From this analysis, we selected a 24-bit ADC from Linear Technology, since its resolution would greatly exceed that of the integrated 10-bit ADC within a controller. Also concluded from this analysis was the selection of a 500 microOhm sense resistor, which yields the best tradeoff between power dissipation and resolution.
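The resolution tradeoff is easy to quantify. The sketch below assumes a 2.5V ADC full-scale range for illustration; the actual reference of the chosen Linear Technology part should be substituted.

```python
# Resolution of a 24-bit ADC reading across a 500 microOhm shunt.
V_FS = 2.5         # assumed ADC full-scale input, V
BITS = 24
R_SHUNT = 500e-6   # sense resistance, ohms

V_LSB = V_FS / 2 ** BITS    # smallest resolvable shunt voltage, V
I_LSB = V_LSB / R_SHUNT     # smallest resolvable current, A
print(f"LSB: {V_LSB * 1e9:.0f} nV -> {I_LSB * 1e3:.3f} mA")
```

In practice noise, not the LSB, limits the usable resolution, but even a few effective bits fewer still far exceeds a 10-bit integrated ADC across the same tiny shunt.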

The ATtiny85 is our chosen microcontroller. We explored the options of a TI MSP430 and an Atmel ARM controller; however, the ATtiny85 has been used before to monitor battery charging and has open-source Arduino libraries that will keep us from reinventing the wheel. Another aspect that makes the ATtiny85 attractive is that it provides only the bare minimum of what we require; without additional bells and whistles, it conserves more power. In addition, Zach has extensive experience with Arduino-based devices, and many forums are available if problems arise.

In order to provide maximum power to the battery, a Maximum Power Point Tracker is used. There are essentially three input variables and one output variable. The voltage and current at the output of the TEM define the power input to the MPPT system. The voltage of the battery varies, but within a range set by the over-discharge/over-charge circuitry inside the battery. Varying the output current of the DC-DC converter by modifying the duty cycle matches the output power to the input power. The duty cycle is determined by monitoring changes in the output current. This algorithm is shown here: MPPT Algorithm
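The core of the perturb-and-observe loop described above can be sketched in a few lines: perturb the duty cycle, and reverse direction whenever measured power falls. The power curve below is a toy stand-in for the real TEG-plus-converter measurement, with hypothetical limits and step size.

```python
# Minimal perturb-and-observe sketch (illustrative only).
def perturb_and_observe(duty, power, prev_power, direction, step=0.01):
    """Return the updated (duty, direction) after one P&O step."""
    if power < prev_power:
        direction = -direction  # power fell: reverse the perturbation
    duty = min(max(duty + direction * step, 0.05), 0.95)
    return duty, direction

def measure_power(duty):
    """Toy output-power curve with its maximum at duty = 0.6."""
    return 20.0 - 100.0 * (duty - 0.6) ** 2

duty, direction = 0.3, 1
prev_power = measure_power(duty)
for _ in range(60):
    power = measure_power(duty)
    duty, direction = perturb_and_observe(duty, power, prev_power, direction)
    prev_power = power
print(f"Duty cycle settles near {duty:.2f}")
```

Note the characteristic P&O behavior: the duty cycle never sits exactly at the maximum but oscillates around it by one step, which is the tradeoff for the algorithm's simplicity and fast response.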

The microcontroller pseudocode can be found here: Microcontroller Pseudocode