The Hydrokinetic Power Resource in a Tidal Estuary: The Kennebec River of the Central Maine Coast

As we look to the world for sources of renewable energy, every bit we can harness is important.  Unlike our current system of high-volume power generators, the grid of the future increasingly seems to be composed of many small generators working together in harmony.  One such source we have the technology to take advantage of is tidal power.  The power available from the kinetic energy of tidal flows can be significant even in areas with relatively mild tidal ranges along coasts and estuaries.  Using a numerical circulation model, David A. Brooks calculates peak available power in one such region, the central Maine coast.  Historically, the implementation of tidal power plants has been blocked by the cost of constructing and maintaining the requisite dams, gates, fishways, and locks, as well as by concerns about impacts on fisheries, navigation, and a host of other environmental issues.  However, with fossil fuel supplies dwindling, global warming concerns looming, and turbine technology improving, tidal power has rekindled society's interest.  Moreover, hydrokinetic tidal power, unlike barrage-style tidal plants and other hydropower sources, does not require large impoundments such as dams, which reduces installation and operational costs and minimizes environmental impacts.—Donald Hamnett
Brooks, David A., 2011. The hydrokinetic power resource in a tidal estuary: The Kennebec River of the central Maine coast. Renewable Energy 36, 1492–1501.

            Brooks examined the central Maine coast because the Gulf of Maine and the adjoining Bay of Fundy are known for resonant semi-diurnal tides.  The range of these tides exceeds 15 meters at the head of the bay, though the mean for the coast as a whole is a more modest 3 meters.  In confined parts of river estuaries, in narrow interconnecting passages, and between nearshore islands, the tidal currents can nevertheless exceed 2 meters per second.  To simulate the tidal, riverine, and wind-driven circulation of the coast, Brooks used a three-dimensional hydrodynamic model known as MECCA, the Model for Estuarine and Coastal Circulation Assessment.  On a three-dimensional grid, the model calculates the velocity, temperature, and salinity fields from the equations for conservation of mass and momentum, with forcing specified by tides at the open ocean boundaries, by river discharge at inshore points, and by wind at the free surface.  The bathymetric and coastline data were provided by the National Geophysical Data Center.  Velocities above 2 meters per second were found at the mouth of the Kennebec River, at Bluff Head, and at a few other points; at this speed, the power available from the tide's kinetic energy is about 3 kilowatts per square meter.
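The figures above reflect the cube-law dependence of hydrokinetic power on current speed.  A minimal sketch in Python of the standard relation P = ½ρv³, assuming a typical seawater density (both the formula and the density are textbook conventions, not values taken from Brooks's model):

```python
# Kinetic power flux of a free-flowing current: P = 0.5 * rho * v^3.
# Standard textbook relation; the density is an assumed typical value.
RHO_SEAWATER = 1025.0  # kg/m^3

def power_density_kw_per_m2(v: float) -> float:
    """Kinetic power passing through 1 m^2 of flow cross-section, in kW/m^2."""
    return 0.5 * RHO_SEAWATER * v**3 / 1000.0

print(round(power_density_kw_per_m2(2.0), 1))  # ~4.1 kW/m^2 of raw flux
```

Note that the raw flux at 2 meters per second comes out somewhat above the quoted 3 kilowatts per square meter; the quoted figure presumably folds in additional assumptions, such as extraction efficiency.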
            Kinetic tidal power is widely available in regions with moderate tidal ranges, and it is dispersed throughout those regions.  This attribute contrasts with traditional forms of generation, which concentrate dependable, high-volume output at a few sites.  The centralized nature of traditional power requires extensive distribution grids and poses a security risk, since a problem at a single plant can take out power for a wide area.  Though tidal power has this advantage over conventional generation, it would require electrical grids that can accept and blend multiple power pulses.  From the model's calculations, hundreds of megawatts of peak power are associated with the central Maine coast's tidal systems, some of which could be practically harnessed.  The most promising site, in the lower Kennebec estuary near Bluff Head, had a maximum power density of 6.5 kilowatts per square meter.  Were this resource harnessed from a 500-square-meter sub-region of the channel and connected to the grid, it could supply the energy needs of about 150 typical homes, each consuming 1.5 megawatt-hours a month.  Further study of the monthly vertical and horizontal structure of the tides would be required to fully grasp the power potential of these coastal regions.  As a small-scale generation tool, tidal power has the potential to be another piece of the smart grid of the future.
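The 150-home figure can be checked with simple averaging; a back-of-the-envelope sketch using only the numbers quoted above (the implied extraction fraction is an inference, not a value from the paper):

```python
# Rough check of the "150 homes" claim from the quoted figures.
peak_density_kw_m2 = 6.5        # peak power density near Bluff Head
area_m2 = 500.0                 # sub-region of the channel
home_kwh_per_month = 1500.0     # 1.5 MWh per typical home per month
hours_per_month = 730.0

peak_power_kw = peak_density_kw_m2 * area_m2                  # 3250 kW at peak
avg_demand_kw = 150 * home_kwh_per_month / hours_per_month    # ~308 kW average

# The homes need only ~9.5% of the instantaneous peak on average, leaving
# headroom for the tidal cycle and for turbine conversion losses.
print(peak_power_kw, round(avg_demand_kw), round(avg_demand_kw / peak_power_kw, 3))
```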

Optimization of tilt angle for solar panel: Case study for Madinah, Saudi Arabia

In the search for a feasible alternative to conventional energy production, solar power is a proposed solution with considerable upside, as the sun delivers a massive amount of energy to the Earth daily.  Roadblocks to the implementation of solar and other renewables include cost, infrastructure, and efficiency, and improvement in any of these areas will hasten the transition to solar power.  Efficiency can be further broken down into technology and logistics.  While extensive research is needed to develop new materials that improve on the modern components of solar panels, the logistics of giving the panels the best opportunity to harness the sun's power are more tractable.  We already have extensive data on which locations receive the most light and on how to place panels to use it fully.  One method of improving solar efficiency is tilting the panels.  Horizontally mounted panels are less efficient because the sun's rays strike them at an angle, reducing the effective collection area; tilting the panels to face the sun head-on makes full use of their surface area.  Fixed tilted panels have conventionally been installed at an angle approximately equal to the site's latitude, which maximizes collection on an annual-average basis.  However, as Benghanem discovered, adjusting the angle month by month can significantly increase the panels' output.—Donald Hamnett
Benghanem, M., 2010. Optimization of tilt angle for solar panel: case study for Madinah, Saudi Arabia. Applied Energy, doi:10.1016/j.apenergy.2010.10.001.

The amount of solar energy incident on a panel is a complex calculation, and it varies with the time scale used.  It is a function of the local radiation climatology, the orientation and tilt of the collector surface, and the ground's reflection properties; orientation and tilt are the factors we control.  Though modeling this situation is not new, existing models make an assumption that yields incorrect values: they assume that sky radiation is isotropically distributed at all times, in other words, uniformly distributed in all directions.  This is not the case on our planet, so Benghanem's model arrives at a more sophisticated expression for solar energy potential.  He developed an estimate based on anisotropic modeling methods, using horizontal solar radiation data from meteorological databases.  The sky-diffuse radiation can be expressed through the ratio of the average daily diffuse radiation on a tilted surface to that on a horizontal surface.  This relation is composed of values for the following variables: daily beam radiation incident on a horizontal surface, extraterrestrial daily radiation incident on a horizontal surface, daily ground-reflected radiation incident on an inclined surface, the ratio of average daily beam radiation incident on an inclined surface to that on a horizontal surface, and the surface slope from the horizontal.  The overall analysis also tracks the sun's movement throughout the day to keep the panels perpendicular to the sun's radiation, which requires the zenith angle, expressed in solar time and hour angle rather than in conventional time-zone clock time.  The cosine of the zenith angle can be calculated as a function of hour angle, latitude, and solar declination.
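The zenith-angle relation mentioned above is standard; a short Python sketch, using Cooper's approximation for the solar declination (a common textbook formula, not necessarily the exact form Benghanem uses):

```python
import math

def declination_deg(day_of_year: int) -> float:
    """Solar declination in degrees, via Cooper's approximation."""
    return 23.45 * math.sin(math.radians(360.0 * (284 + day_of_year) / 365.0))

def cos_zenith(latitude_deg: float, day_of_year: int, hour_angle_deg: float) -> float:
    """cos(theta_z) = sin(phi)sin(delta) + cos(phi)cos(delta)cos(omega)."""
    phi = math.radians(latitude_deg)
    delta = math.radians(declination_deg(day_of_year))
    omega = math.radians(hour_angle_deg)  # 0 at solar noon, 15 degrees per hour
    return (math.sin(phi) * math.sin(delta)
            + math.cos(phi) * math.cos(delta) * math.cos(omega))

# Solar noon in Madinah (~24.5 deg N) at the June solstice (day 172):
print(round(cos_zenith(24.5, 172, 0.0), 4))  # ~0.9998: the sun is almost overhead
```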
This analysis confirmed that the optimum tilt angle is indeed different for each month of the year, and that the yearly optimum tilt angle equals the latitude of the site.  At the Madinah site, the example modeled, the winter months (December-February) required an angle of 37 degrees, the spring months (March-May) 17 degrees, the summer months (June-August) 12 degrees, and the autumn months (September-November) 28 degrees.  Overall, using the yearly optimum instead of the monthly optima sacrificed about 8% of the energy that would otherwise have been produced at the Madinah site.

Pricing Offshore Wind

Offshore wind power is an environmentally and economically beneficial energy source that significantly reduces harmful pollutants.  Though more expensive than land-based wind power and conventional energy sources, it is one of the least costly renewable energy sources, and for many coastal states it is also the largest renewable resource available.  Based on these facts, it seems logical to invest in offshore wind power; however, its cost relative to conventional sources and its fluctuating output pose obstacles to implementation.  Regardless, the advantages of offshore wind have spurred policymakers across the globe to prioritize it relative to other renewable sources and to begin deploying it on a large scale.  Policy and investment decisions regarding this technology require an accurate cost analysis over time.  Levitt and his colleagues used two such cost models for their analysis.  The first, the Levelized Cost of Energy (LCOE), measures the total financial cost of produced energy without considering policy or financial structures.  The second, the Breakeven Price (BP), gives the minimum electricity sale price for financial viability given a particular policy, tax, and purchase contract structure.  These models can help decision-makers by providing the information needed to bring down costs and make offshore wind financially viable.—Donald Hamnett
Levitt, Andrew C., et al., 2011. Pricing offshore wind power. Energy Policy. doi:10.1016/j.enpol.2011.07.044.

            The LCOE is calculated from two factors: energy production and the costs of construction and operations.  For each year, the cost cash flows of the project, including construction, are compiled in nominal terms.  The Net Present Value (NPV) of each year of the plant's lifetime is assessed using these nominal values and nominal interest rates, so no separate inflation adjustment is made.  Next, the energy production for each year of the project is determined, and each unit of energy is given a dollar value that is constant over the life of the project in real terms.  Simply put, the LCOE is the NPV of costs (in units of currency) divided by the NPV of energy produced (in units of energy).  LCOE and BP share four determining input factors, with BP encompassing an additional three.  The shared principal determinants are Capital Expenditure (CAPEX), the cost to buy and build the plant; Operating Expenditure (OPEX), the ongoing cost to operate and maintain the plant; the discount rate, the return on investment required to attract project investors; and the net capacity factor, the fraction of long-term power generated divided by nameplate power.  The further parameters covered in BP are tax and policy inputs as applicable to the scenario; the price escalator, the price increase each year as determined by the power purchase contract; and the financial structure (debt term, term of the power purchase agreement, etc.).  BP is calculated similarly to LCOE, but with costs and benefits considered under U.S. policies.
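The ratio-of-NPVs definition can be made concrete; a minimal sketch with illustrative placeholder numbers (not inputs from Levitt et al.):

```python
def lcoe(costs, energy, r):
    """Levelized cost of energy: NPV of costs divided by NPV of energy.

    costs  -- cash outflows per year, with year 0 carrying construction (CAPEX)
    energy -- kWh generated per year (zero during construction)
    r      -- nominal discount rate
    """
    npv_costs = sum(c / (1 + r) ** t for t, c in enumerate(costs))
    npv_energy = sum(e / (1 + r) ** t for t, e in enumerate(energy))
    return npv_costs / npv_energy

# Hypothetical 20-year plant: $2.5B CAPEX, $80M/yr OPEX, 1.75 TWh/yr output.
costs = [2.5e9] + [8.0e7] * 20
energy = [0.0] + [1.75e9] * 20
print(round(lcoe(costs, energy, 0.08), 3), "$/kWh")  # ~0.19 $/kWh
```

Dividing an NPV of dollars by an NPV of kilowatt-hours looks odd at first, but the result is exactly the constant price per unit of energy at which the project breaks even, which is what the LCOE is meant to capture.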
            In the analysis, LCOE and BP were calculated under four financial structures and three cost structures.  The financial structures were Corporate, Tax Equity, Project Finance, and Government Ownership.  The three cost structures are First-Of-A-Kind (FOAK), a scenario in which the plant is the initial project in an underdeveloped market such as the United States; Global Average (GA), a scenario similar to Northern Europe, where the market is more mature; and Best Recent Value (BRV), the best case under the current European market and available technology.  Several patterns emerged from the analysis, of which the most pertinent follow.  First, Project and Corporate structures yield similar results, while Tax Equity prices are higher than the LCOE due to the high returns investors require.  Second, BRV prices are 2.1 to 3.8 times lower than FOAK prices due to differences in many parameters.  The DOE loan guarantee program modestly improves FOAK, but has no effect, or an adverse one, on the other cost structures; since the program currently applies only to FOAK projects, this is of little consequence.  Lastly, BP is lowest for Government Ownership under the FOAK and GA structures, owing to the low cost of government borrowing and the absence of high required returns for investors.  From the analysis, we learn that the high current FOAK costs are not representative of the actual cost of offshore wind power.  To reduce costs to the GA and then the BRV point, policy must increase the industrialization and manufacturing scale of offshore wind.  Such policy, along with further research and development, has the potential to bring the costs of offshore wind below the BRV point.

Meta-Analysis of Estimates of Life Cycle Greenhouse Gas Emissions from Concentrating Solar Power

A life cycle assessment (LCA) is a method of predicting the environmental impacts of energy technologies.  The advantage of renewable energy plants is that they do not emit significant amounts of greenhouse gases (GHGs) during operation; over their full lifespans, however, they are responsible for some GHG emissions.  One such renewable energy source is concentrating solar power (CSP).  Analysts have conducted LCAs of the three main CSP technologies: parabolic trough (trough), power tower (tower), and parabolic dish (dish).  Existing LCAs of CSP technology show relatively high variability caused by a range of factors that include “the type of technology being investigated, scope of analysis, assumed performance characteristics, location, data source, and the impact assessment methodology used” (Heath and Burkhardt 2011).  Garvin A. Heath and John J. Burkhardt of the National Renewable Energy Laboratory conducted a meta-analysis that reduces the variability in CSP LCAs through a method called harmonization, which takes several already-published reports and establishes more consistent methods and assumptions among them.  As part of the larger LCA Harmonization Project of the United States’ National Renewable Energy Laboratory, this meta-analysis will be used to clarify estimates of central tendency and to inform future decision making about CSP technology as a whole, though estimates for specific plants will deviate from these generic values.—Donald Hamnett

Heath, Garvin A., Burkhardt, John J., 2011. Meta-Analysis of Estimates of Life Cycle Greenhouse Gas Emissions from Concentrating Solar Power. National Renewable Energy Laboratory.

The three major life cycle phases for the typical CSP plants used in the study are upstream, operational, and downstream.  Upstream processes include extraction of raw materials, materials manufacturing, component manufacturing, site improvements, and plant assembly.  Operational processes include manufacture and transportation of replacement components, fuel consumption by maintenance and cleaning vehicles, on-site natural gas combustion, and electricity consumption from the regional power grid.  Downstream processes include plant disassembly and the disposal or recycling of plant materials.  Searching the English-language literature on the environmental impacts of CSP, Heath and Burkhardt found 125 references, 13 of which provided sufficient numerical analyses, yielding 42 GHG estimates (19 trough, 17 tower, 6 dish).  This pool was trimmed to 36 for the first-level harmonization because the dish estimates were too few.  The second-level harmonization, which requires more complete documentation of inputs and assumptions, used five trough studies providing five estimates.  The first, “light” level of harmonization adjusts emissions estimates proportionally, at a gross level, by setting performance characteristics to consistent values and creating a common system boundary.  The parameters chosen for this level are solar fraction, direct normal irradiance, lifetime, solar-to-electric efficiency, global warming potentials, auxiliary natural gas consumption, and auxiliary electrical consumption.  The more input-intensive harmonization method used GWIs, measures of the mass of GHGs emitted in the production of common materials and in other activities.
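The "light" harmonization step is essentially a proportional rescaling to common parameter values.  A simplified sketch for a single parameter, plant lifetime (the study adjusts several parameters jointly and separates one-time from recurring emissions, so this is only illustrative):

```python
def harmonize_lifetime(estimate_g_per_kwh: float,
                       published_lifetime_yr: float,
                       harmonized_lifetime_yr: float) -> float:
    """Rescale a published g CO2-eq/kWh estimate to a common plant lifetime.

    One-time (embodied) emissions are spread over lifetime generation, so
    the per-kWh figure scales inversely with the assumed lifetime. Treating
    the entire estimate this way is a simplification for illustration.
    """
    return estimate_g_per_kwh * published_lifetime_yr / harmonized_lifetime_yr

# A study that assumed a 20-year life, harmonized to a common 30-year life:
print(round(harmonize_lifetime(40.0, 20.0, 30.0), 1))  # 26.7 g CO2-eq/kWh
```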
            In the light analysis, the most effective harmonization parameter was solar fraction, which by itself reduced the interquartile range (IQR) of trough CSP emissions by 85%.  The factor with the biggest impact on central tendency was auxiliary electrical consumption, which by itself increased the median of trough emissions by 50%.  Cumulatively, the light harmonization parameters decreased the trough IQR by 69% and increased the median by 76%.  For tower systems, the IQR and median were reduced by 26% and 34%, respectively.  The more intensive GWI harmonization of the five trough CSP estimates reduced their median by an additional 6% and increased their range by 5%.  When pooled with the 14 trough estimates not assessed in the GWI analysis, the IQR decreased by an additional 9%.  These harmonized GHG emission estimates give decision makers a more thorough and consistent view of the factors involved in integrating CSP technology.

Wireless Solar Water Splitting Using Silicon-Based Semiconductors and Earth-Abundant Catalysts

Solar photovoltaic (PV) cells provide a renewable method of generating electric current, and photoelectrochemical systems built on them can also store that energy chemically as hydrogen fuel. This technology offers the hope of replacing our increasingly scarce fossil fuels with a dependable, effectively inexhaustible alternative; however, the monetary cost of manufacturing these cells has blocked their implementation. Traditionally, the process required the use of “prohibitively expensive light-absorbing materials [e.g., (Al)GaAs and GaInP], and/or fuel-forming catalysts (e.g., Pt, RuO2, IrO2), and strongly acidic or basic reaction media, which are corrosive and expensive to manage over the large areas required for light harvesting” (Reece et al. 2011). The aim of Reece and his colleagues was to mimic the photosynthetic process with earth-abundant materials under neutral pH conditions. They achieved this goal for both wired and wireless solar photoelectrochemical (PEC) systems, replicating leaf photosynthesis to split water into hydrogen and oxygen gas. The wired system attained an efficiency of 4.7%, and the wireless system 2.5%. The continued development of this technology strengthens the potential for solar energy to replace traditional sources, driven not only by necessity but by market factors as well.—Donald Hamnett

Reece, Steven Y., et al., 2011. Wireless solar water splitting using silicon-based semiconductors and earth-abundant catalysts. Science 334, 645.

The specific components Reece and his colleagues developed to make this process work are a cobalt oxygen-evolving catalyst (Co-OEC), a nickel-molybdenum-zinc hydrogen-evolving catalyst (NiMoZn), and a triple-junction amorphous silicon (3jn-a-Si) interface between the catalysts, coated with indium tin oxide (ITO). In the wired case, the NiMoZn catalyst was deposited on a nickel mesh substrate, which was wired to the 3jn-a-Si electrode. In the wireless case, the NiMoZn catalyst was deposited directly onto the stainless steel surface adjacent to the silicon electrode. Baseline values for the 3jn-a-Si cell were obtained by operating the cell in a three-electrode voltammetry setup. With no illumination, the cell passed a current of less than 0.05 mA/cm2, a low value. Under 1 sun of illumination (100 mW/cm2, air mass 1.5), the cell produced a current of 0.39 mA/cm2 at a potential of 0.55 V in a 1 M potassium borate electrolyte (pH 9.2). To increase the efficiency, they added 0.25 mM Co2+(aq), from which the Co-OEC forms; this increased the current to 4.17 mA/cm2. Bubbles at both electrodes indicated the formation of oxygen and hydrogen gas. The experiment went on to study the effect of differing thicknesses of Co-OEC film on the electrode surface; a thin (85 nm) film, obtained with a 5-minute deposition time, created photoanodes with optimum performance. There is a trade-off between Co-OEC film thickness and cell performance: though the catalyst increases activity, a built-up film also blocks incident radiation. When tested, the wireless cell was stable for 10 hours, and its stability was found to depend on the type of conductive oxide barrier layer used.
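The reported efficiencies are consistent with the standard solar-to-fuels accounting, in which the operating current density is multiplied by the 1.23 V thermodynamic water-splitting potential and divided by the incident solar power.  A sketch (the paper's exact accounting may differ, and 100% Faradaic efficiency is assumed):

```python
def solar_to_fuels_efficiency(j_ma_per_cm2: float,
                              p_in_mw_per_cm2: float = 100.0) -> float:
    """Solar-to-hydrogen efficiency: j * 1.23 V / P_in.

    Assumes all current goes into water splitting (100% Faradaic
    efficiency); 1.23 V is the thermodynamic potential for the reaction.
    """
    return j_ma_per_cm2 * 1.23 / p_in_mw_per_cm2

# The reported 4.7% wired efficiency implies an operating current density
# near 3.8 mA/cm^2 under 1 sun:
print(round(solar_to_fuels_efficiency(3.8), 3))  # ~0.047
```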

Though work remains to increase the efficiency and stability of these PEC cells, Steven Reece and his team have succeeded in replicating the photosynthetic functions of a leaf. Moreover, because they have done so at near-neutral pH, hydrogen and oxygen fuels could be generated without a membrane, as the two gases are only sparingly soluble under these conditions. Most notably, they have replicated photosynthesis with low-cost, earth-abundant materials. Continued research in this area may well spark an energy revolution, as the decreased overhead will incentivize investment in solar technology.

Putting the “Smarts” into the Smart Grid: A Grand Challenge for Artificial Intelligence

The developed world has grown to its current state of monetary wealth through the exploitation of cheap energy in the form of fossil fuels.  However, oil is becoming increasingly scarce, with peak production predicted within twenty years.  To further complicate matters, much of the remaining oil reserves lie in environmentally and politically vulnerable areas, making oil subject to a higher chance of supply-side shocks that drive energy prices higher.  Once we do run out of oil, we will need to transition to mostly renewable electrical energy sources such as wind, solar, and tidal power.  Current projections already estimate that the world's electrical energy demand will increase by 76% by 2030, relative to 2007 levels.  To accommodate this increased demand, the grids on which the transmission of electricity depends must be adapted to a renewable infrastructure.  The new “smart” grids must both integrate widely distributed generators with varying outputs and manage prosumers, who consume and produce electricity based on their local conditions and requirements.  Perhaps the most striking obstacle in the implementation of a new grid system is the artificial intelligence (AI) that must be developed to control energy flow, as investigated by Ramchurn et al. (2011).—Donald Hamnett
Ramchurn, S., Vytelingum, P., Rodgers, A., Jennings, N., 2011. Putting the “smarts” into the smart grid: a grand challenge for artificial intelligence. University of Southampton, 1–9.

            Ramchurn and his colleagues organized the challenges involved with creating the smart grid’s AI into five categories:  demand-side management, electric vehicles, virtual power plants, energy prosumers, and self-healing networks.  Each of these possesses its own obstacles.  The researchers took into account the current state-of-the-art technologies to determine the AI advances that must be made to incorporate the aforementioned categories.
            For the grid to be safe and efficient, it must balance supply and demand exactly.   The current grid adjusts supply to meet power demand, but in a renewable grid with varying output, demand-side management would help reduce demand when supply is insufficient.  AI that responds to price levels, owners' preferences, and constraints on the grid could be used to flatten the demand curve; for example, appliances could be set on timers to run during low-demand hours.  Both individual consumers and grid operators must also have the ability to control and predict energy use.
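A toy illustration of such price-responsive scheduling, with made-up prices and a greedy strategy that is not from Ramchurn et al.:

```python
# Toy demand-side management: slide a deferrable appliance run into the
# cheapest contiguous window of the day. Prices and duration are invented.
def cheapest_start(hourly_price, run_hours):
    """Return the start hour minimizing total cost of a contiguous run."""
    window_costs = [sum(hourly_price[t:t + run_hours])
                    for t in range(len(hourly_price) - run_hours + 1)]
    return min(range(len(window_costs)), key=window_costs.__getitem__)

hourly_price = [30, 28, 25, 24, 24, 26, 35, 50, 55, 48, 45, 44,
                43, 42, 44, 47, 55, 60, 58, 50, 42, 38, 34, 31]  # $/MWh
print(cheapest_start(hourly_price, 3))  # a 3-hour dishwasher cycle -> hour 2
```

A real deployment would also have to respect owners' deadlines and grid constraints, which is precisely where the learning and coordination challenges described by the authors arise.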
            Another consideration for the smart grid is the future use of the electric vehicle (EV).  These vehicles put a large load on the grid because they require rapid charging sufficient for a reasonable range of travel.  To prevent tripping transformers, AI would need to predict the individual and aggregate charging demands of EVs and provide incentives to decentralize charging.
            In the smart grid, virtual power plants (VPPs) are one proposed method of combining heterogeneous actors into aggregates.  AI would be necessary to model the complex interactions involved in forming VPPs.  One difficulty is that on a constrained grid, each individual's actions affect all other parties.  A fair profit-sharing outcome would need to be reached while forming the most efficient VPPs possible.
            The emergence of the energy prosumer with renewable energy on the smart grid requires AI that can predict prosumer profiles.  The consumption and generation prediction could then be used with price predictions to inform energy trading.  Profit maximization could be reached with such predictions, as could human-grid interactions that take into account prosumers’ preferences.
            A self-healing network requires real-time information to be shared between different nodes, which can coordinate to balance supply and demand.  The AI would need to estimate voltage and phase distribution given prosumers’ demand and supply.  Lastly, the AI would need to make predictions accurately even when faced with incomplete information.
            Switching to renewable energy is thus not only a matter of energy production but also one of infrastructure and control technology, including artificial intelligence, far more sophisticated than the current grid.

Enabling Greater Penetration of Solar Power via the Use of CSP with Thermal Energy Storage

Due to concerns about dwindling fossil fuel reserves and climate change, and to the falling costs of solar photovoltaic (PV) energy, solar power is being increasingly integrated into power grids. Unfortunately, several characteristics of current grids prevent solar power from being fully utilized. One limitation is that the peak hours of solar availability do not coincide with the peak hours of demand. Another is the current grid's inflexibility: its limited ability to raise or lower output from conventional energy sources to match fluctuations in solar output. A proposed solution is thermal energy storage (TES) deployed with concentrating solar power (CSP) (Denholm and Mehos 2011). This technology complements PV by capturing otherwise wasted solar energy as heat, which can be stored and dispatched during periods when PV generation is minimal. Furthermore, pairing PV with CSP turns solar output into a firmer, more dependable source, less subject to random fluctuations. This allows solar to displace a portion of the conventional generation on the grid and thus raises the share of output that renewables can provide.—Donald Hamnett

Denholm, P., Mehos, M, 2011. Enabling greater penetration of solar power via the use of CSP with thermal energy storage. National Renewable Energy Laboratory. 1–28.

Paul Denholm, Mark Mehos, and colleagues at the National Renewable Energy Laboratory used the REFlex model to predict the ability of CSP to increase grid flexibility and solar penetration in the Southwestern United States. The model compares the hourly load with renewable resources to calculate energy curtailment, given the grid's flexibility, that is, its ability to change conventional generator output to accommodate variable renewable energy sources. To determine the limits of PV, the researchers used weather data from 2005 and 2006 in the System Advisor Model (SAM), which converts the data into hourly PV output. These data were in turn used to model the interaction between solar- and wind-generated energy, using simulated 2005 and 2006 data from the Western Wind and Solar Integration Study (WWSIS). Hourly generation was simulated over a full year, and the four-day period of April 7-10 was displayed as the paper's example, with the area under the load curve split into sections representing the contributions of the various energy sources. Simulations were run for a PV-only system on a grid with an assumed 80% flexibility, and for a PV-plus-CSP system, in which CSP was added to the REFlex simulation using SAM-produced hourly generation values.
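The flexibility constraint at the heart of this analysis can be illustrated simply.  A sketch that reads "80% flexible" as meaning conventional generation can ramp down to 20% of the hourly load (a simplified reading of the summary, not the actual REFlex implementation):

```python
# Simplified REFlex-style curtailment: renewable output beyond what the
# grid can absorb (load minus an inflexible generation floor) is wasted.
def curtailment(load_mw, renewable_mw, flexibility=0.8):
    """Total curtailed renewable energy over an hourly series (MWh)."""
    curtailed = 0.0
    for load, ren in zip(load_mw, renewable_mw):
        must_run = (1.0 - flexibility) * load  # inflexible conventional floor
        usable = min(ren, load - must_run)     # room left for renewables
        curtailed += max(0.0, ren - usable)
    return curtailed

# Midday PV in excess of the absorbable share is curtailed:
print(curtailment([1000, 1100, 1200], [700, 950, 1100]))  # 210.0 MWh
```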

The simulation under the PV-only scenario showed that a significant proportion of annual PV production, 5%, is curtailed, because PV output peaks when demand is low and other generators cannot ramp down quickly enough. In this situation, PV curtailment and cost increase exponentially as the percentage of energy from PV increases. Clearly, this poses an obstacle to an increasingly renewable grid, because it limits the proportion of solar energy that is practical. For this PV to be utilized, a more flexible grid is required. When the researchers included CSP in the REFlex model, the results showed improved flexibility and reduced curtailment.

The CSP model was based on wet-cooled trough plant technology (Wagner and Gilman 2011). In this scenario, solar energy that would otherwise have been curtailed during low-load hours was stored thermally during the day while PV energy fed the grid; as PV output waned in the evening hours and load increased, the stored CSP energy was dispatched. Energy that would have been wasted during low-load hours could now be shifted to peak hours. This decreased annual solar curtailment to less than 2% and increased the solar contribution to 25% of generation: 15% PV and 10% CSP.
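The storage shift can be bolted onto the same toy model; again a hypothetical sketch, ignoring round-trip losses and power limits for brevity:

```python
# Toy TES dispatch: energy that would be curtailed charges storage, which
# discharges later when load exceeds renewable supply.
def shift_with_storage(load_mw, renewable_mw, flexibility=0.8, cap_mwh=500.0):
    stored, delivered = 0.0, 0.0
    for load, ren in zip(load_mw, renewable_mw):
        room = load - (1.0 - flexibility) * load   # absorbable renewable power
        if ren > room:                             # surplus hour: charge TES
            stored = min(cap_mwh, stored + (ren - room))
        else:                                      # deficit hour: discharge TES
            discharge = min(stored, room - ren)
            stored -= discharge
            delivered += discharge
    return delivered  # MWh recovered that would otherwise have been curtailed

print(shift_with_storage([1000, 1200, 1100, 1000], [700, 1100, 500, 300]))  # 140.0
```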

Denholm and Mehos also found that CSP with TES lowered the baseline amount of conventional generation needed, because stored CSP energy allows solar to serve load at all hours. This result cascades: lowering the minimum conventional generation requirement lets additional solar power be added at much lower marginal curtailment rates, and it opens up the possibility of still larger solar and wind components. Through their grid analysis, the researchers determined that CSP technology can both increase the usable fraction of solar output and enable greater penetration of variable energy sources.