  • 25 Oct 2021 8:30 AM | Anonymous member (Administrator)

    Author: This article is part of an expert series written by Dr. Charbel Rizk, Founder & CEO of Oculi® - a spinout from Johns Hopkins - a fabless semiconductor startup commercializing patented technology to address the major inefficiencies in vision technology. In this article, Dr. Rizk discusses the hypothesis that he and his team have developed: efficient Vision Intelligence (VI) is a prerequisite for effective Artificial Intelligence (AI) for edge applications and beyond.

    Despite astronomical advances, human vision remains superior to machine vision and is still our inspiration. The eye is a critical component, which explains the predominance of cameras in AI. With megapixels of resolution and trillions of operations per second (TOPS), one would expect today's vision architecture (camera + computer) to be on par with human vision. Yet current technology lags by as much as 40,000x, particularly in efficiency. The time and energy "wasted" in extracting the required information from the captured signal are to blame: together they create a fundamental tradeoff between time and energy, and most solutions optimize one at the expense of the other.

    We remain a far cry from replicating the efficacy and speed of human vision. So what is the problem? The answer is surprisingly simple: 

    1. Cameras and processors operate very differently from the human eye and brain, largely because they were historically developed for different purposes. Cameras were built for accurate communication and reproduction. Processors have evolved over time with operations per second as the primary performance measure. The latest trend is domain-specific architecture (i.e., custom chips), driven by demand from applications, such as image processing, that benefit from specialized implementations.

    2. Another important disconnect, albeit less obvious, is the architecture itself. A solution assembled from existing components (i.e., off-the-shelf cameras and processors) is difficult to integrate into a flexible system and, more importantly, to optimize dynamically in real time, a key aspect of human vision.

    Machine versus Human Vision

    To compare, we need to first examine the eyes and brain and the architecture connecting them. 

    The eye has ~100x more resolution, and if it operated like a camera it would transfer ~600 Gb/s to the brain. However, the eye-brain "data link" has a maximum capacity of about 10 Mb/s. So how does it work? The answer is again simple: eyes are specialized sensors that extract and transfer only the "relevant" information (vision intelligence), rather than taking snapshots or videos to store or send to the brain. While cameras are mostly light detectors, the eyes are sophisticated analysts, processing the scene and extracting clues. This sparse but high-yield data is received by the brain for additional processing and eventual perception. All of this processing aims at reliable and rapid answers to three questions: What is it? Where is it? And eventually, what does it mean? The first two questions are largely answered within the eye; the last is answered in the brain. Finally, an important element of efficiency is the communication architecture itself. The eye and the brain are rarely performing the same function at any given moment, and signals from the brain back to the eye allow the two organs to continuously optimize for the task at hand.
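    The gap between the two data rates quoted above implies an enormous in-sensor data reduction, which can be checked with a one-line calculation (a sketch using only the article's own figures):

```python
# Rough comparison of the "camera-like" raw data rate attributed to the eye
# versus the measured capacity of the optic nerve. Both figures are the ones
# quoted in the article; the reduction factor follows from them.

eye_raw_rate_bps = 600e9   # ~600 Gb/s if the eye streamed pixels like a camera
optic_nerve_bps = 10e6     # ~10 Mb/s eye-brain link capacity

reduction_factor = eye_raw_rate_bps / optic_nerve_bps
print(f"In-sensor data reduction: ~{reduction_factor:,.0f}x")  # → ~60,000x
```

    In other words, the eye discards roughly 60,000 of every 60,001 "raw" bits before anything reaches the brain, which is the scale of selectivity a truly smart sensor must achieve.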

    Efficient Vision Intelligence (VI) is a prerequisite for effective Artificial Intelligence (AI) for edge applications 

    Everyone is familiar with the term Artificial Intelligence, but what is Vision Intelligence (VI)?

    It accurately describes the output of an efficient and truly smart vision sensor like the eye: one that intelligently and efficiently selects and transfers relevant data at a sustainable bandwidth. Biology demonstrates that the eye does a good deal of parallel pre-processing to identify and discard noise (data irrelevant to the task at hand), transferring only essential information. A processing platform that equals the brain is an important step toward matching human perception, but it is not sufficient to achieve human vision without "eye-like" sensors. In the world of vision technology, the human eye represents the power and effectiveness of true edge processing and dynamic sensor optimization.

    Efficient Vision Technology is safer and preserves energy  

    As the world of automation grows exponentially and the demand for imaging sensors skyrockets (cameras being the forerunners, with LiDARs and radars around the corner), vision technology that is efficient in resources (photon collection, decision time, and power consumption) becomes even more critical to safety and to saving energy.

    On safety, a vivid example is pedestrian detection, a critical safety function ripe for autonomy, yet currently deployed solutions have limited effectiveness. To highlight the challenge with conventional sensors, consider a camera running at 30 frames (or images) per second (fps). That corresponds to a delay of 33 ms to get one image, and several are usually required. Capturing 5 images takes roughly 165 ms, during which a vehicle at 45 mph (about 20 m/s) travels more than 3 meters before any processing has even begun; detection, tracking, and actuation delays stack on top of that. The capture delay can be reduced by increasing the camera speed (more images per second), but that creates other challenges in sensor sensitivity and/or system complexity. In addition, nighttime operation presents its own unique challenges, and those challenges increase with the sampling speed.
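    The raw capture delay alone can be checked with a short calculation (using the 45 mph and 5-frames-at-30-fps figures above; the full pipeline of detection, confirmation, and reaction multiplies this distance):

```python
# Distance a vehicle covers while a 30 fps camera merely accumulates frames.
# Processing, tracking, and actuation delays all add on top of this floor.

fps = 30
frames_needed = 5
speed_mph = 45

speed_mps = speed_mph * 1609.344 / 3600   # ≈ 20.1 m/s
capture_time_s = frames_needed / fps      # ≈ 0.167 s
distance_m = speed_mps * capture_time_s

print(f"{distance_m:.1f} m traveled during capture alone")  # → 3.4 m
```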

    Real-time processing would also be necessary to not add more delay to the system. Two HD cameras generate about 2 Gbits/sec. This data rate, when combined with the associated memory and processing, causes the overall power consumption for real-time applications to become significant. Some may assume that a vehicle has an unlimited energy supply. But often that is not the case. In fact, some fossil fuel vehicle companies are having to upsize their vehicles’ engines due to the increased electric power consumption associated with ADAS. Moreover, with the world moving towards electric vehicles, every watt counts.  
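    The cited camera data rate is easy to sanity-check. A minimal sketch, assuming uncompressed 720p video at 30 fps with 24-bit color (the exact figure depends on resolution, frame rate, and bit depth, but the order of magnitude matches the ~2 Gb/s cited):

```python
# Raw (uncompressed) data rate of two HD video streams.
# Assumptions: 1280x720 ("HD"), 30 fps, 24 bits per pixel.

width, height = 1280, 720
fps = 30
bits_per_pixel = 24
cameras = 2

rate_bps = width * height * fps * bits_per_pixel * cameras
print(f"{rate_bps / 1e9:.2f} Gb/s raw")  # → 1.33 Gb/s; ~3 Gb/s at full 1080p
```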

    If we think beyond edge applications and look at the power cost of inefficient vision technology in general, the findings may surprise the reader. Recent studies estimate that a single email accounts for about 4 grams of CO2 emissions, and 50 g if it includes a picture, which is exactly the problem with vision technology today: it produces too much data. If we consider a typical vision system (camera + network + storage + processing) and assume, conservatively, a total power consumption of 5 watts with roughly 1 billion cameras on at any given time, this translates to a total energy consumption of about 44 TWh per year. That is more than 163 of 218 countries and territories consume, or midway between the consumption of Massachusetts and Nevada. In the age of data centers, images, and videos, "electronics" is on track to become a dominant energy consumer and source of carbon emissions.
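    The 44 TWh/yr figure follows directly from the stated assumptions, as a back-of-envelope calculation shows:

```python
# Back-of-envelope energy use of a global camera fleet, using the article's
# assumptions: 5 W per vision system, 1 billion systems on at any given time.

power_per_system_w = 5
systems = 1_000_000_000
hours_per_year = 8760            # 24 h x 365 days

energy_twh = power_per_system_w * systems * hours_per_year / 1e12
print(f"{energy_twh:.1f} TWh/yr")  # → 43.8 TWh/yr, i.e. ~44 TWh/yr
```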

    Machine vision is not about capturing pretty pictures; it needs to generate the "best" actionable information, very efficiently, from the available signal (photons). That means optimizing the architecture for edge applications, which by nature are resource constrained. This is exactly what nature provides in human vision. Biological organs such as the human eye and brain operate at performance levels set by fundamental physical limits, under severe constraints of size, weight, and energy resources, the same constraints that tomorrow's edge solutions must meet.

    There is still significant room for improvement simply by optimizing the architecture of current machine vision applications, in particular the signal processing chain from capture to action, and human vision is a perfect example of what's possible. Before the world jumps to adding more sensors to the mix, the focus should be on structuring the system optimally, allowing the power of machine vision to approach that of human vision.

  • 21 Jun 2021 2:30 AM | Anonymous member (Administrator)

    Author: This article is part of an expert series written by Fadi Daou, the CEO of MultiLane – a high speed test and measurement (T&M) company based in Lebanon. Daou discusses the move from 400G to 800G Ethernet at the leading edge of data communication, the challenges and solutions at these high speeds and throughputs, and Lebanon's role in the industry.

    [Disclaimer: The below article is the author's personal opinion]

    The COVID-19 pandemic has accelerated the already dramatic shift to online spaces in every aspect of our lives. From Netflix streaming, to Zoom calls, to sharing documents on Office365 or Google Docs, we are now using more bandwidth than ever. The speed at which data centers can communicate internally correlates directly with how fast they can provide their services. Increasing speeds at the user end – with 5G or faster home WiFi, for example – requires exponentially faster speeds at the server end. To accommodate this ever-increasing demand, hyperscalers – like Google, Amazon, and Microsoft – are constantly working on faster, more efficient technologies.

    Data centers operate on the most fundamental layer of the internet, the physical layer, which deals directly with streams of bits – 1s and 0s – transferred via electrical or, more frequently, optical signals. On this layer, different technologies that enable the rapid transfer of data come together under the Ethernet specification.

    The year 2021 has seen widespread adoption of 400 Gigabit Ethernet (400G), currently the fastest commercially available means of transferring data. Companies like Microsoft are migrating their data center infrastructure from 100G to 400G in anticipation of increasing bandwidth usage over the next five years, but other hyperscalers are pushing to go even faster.  

    At the leading-edge of data communications, we must always operate in anticipation of future technologies that may be two, three, and even four years in advance of what is available now. If 400G is being adopted now, then it is a certainty that the next stage, 800G Ethernet, is no longer a technology of the future but of the present, with prototypes and standards already in development.  

    The rapid approach of 800G Ethernet is all the more certain given that 400G relies on revolutionary new technology, which has laid the foundation for 800G and beyond. As a full breakdown of these technologies is outside the purview of this article, I will focus on two factors that are central to this revolutionary shift in data communications: the move from NRZ to PAM4 signaling and the need for heat dissipation.

    A Tale of Two Signals

    Gigabit Ethernet works by sending one or more signals through a fiber optic cable at a certain speed. Previous generations sent information one bit at a time using Non-Return-to-Zero (NRZ) signaling. At 100G and below, NRZ is ideal, allowing essentially error-free transfer of data. However, NRZ cannot provide reliable throughput at 400G, which has caused a shift to, and heavy reliance on, a different signaling method: PAM4. PAM4 uses four amplitude levels instead of two, encoding two bits in every symbol. This doubles the data carried per cycle, keeping the signaling rate and channel bandwidth manageable at 400G and above.
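    The difference between the two schemes can be sketched with a toy encoder. The Gray-coded mapping below (adjacent levels differ by one bit, the usual convention for PAM4) is illustrative rather than a transcription of any specific standard:

```python
# Toy illustration of NRZ vs PAM4 symbol mapping. NRZ carries 1 bit per
# symbol (two levels); PAM4 carries 2 bits per symbol (four levels).

NRZ_LEVELS = {"0": 0, "1": 1}

# Gray-coded PAM4: bit pairs -> amplitude levels 0..3,
# so adjacent levels differ by a single bit.
PAM4_LEVELS = {"00": 0, "01": 1, "11": 2, "10": 3}

def pam4_encode(bits: str) -> list[int]:
    """Map a bit string (even length) to a sequence of PAM4 levels."""
    return [PAM4_LEVELS[bits[i:i + 2]] for i in range(0, len(bits), 2)]

bits = "00011110"
print(pam4_encode(bits))  # → [0, 1, 2, 3]: 4 symbols where NRZ would need 8
```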

    But PAM4 isn’t without its own challenges. 

    Such a dense signal means that noise, and therefore errors – instances where a 1 is interpreted as a 0 and vice versa – are inevitable. Companies implementing 400G and above must have a keen awareness of what these errors are and how to account for them. Test equipment such as Bit Error Rate Testers (BERTs), advanced oscilloscopes, and loopback modules is crucial to ensuring data center functionality. Modern test instruments can even apply error-mitigation methods directly, to see how they might perform in the field. For PAM4 signals, the most common way to ensure no information is lost is Forward Error Correction (FEC), which appends additional bits and codewords to the data, allowing a certain amount of recovery even when the signal contains errors.
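    Real 400G links standardize far stronger Reed-Solomon codes (e.g. the RS(544,514) "KP4" FEC); the Hamming(7,4) toy below is only the simplest possible illustration of the principle FEC relies on: redundant parity bits let the receiver locate and flip a corrupted bit.

```python
# Minimal illustration of the FEC principle: append parity bits so that a
# single flipped bit can be located and corrected at the receiver.

def hamming74_encode(d):
    """d: list of 4 data bits -> 7-bit codeword [p1, p2, d1, p3, d2, d3, d4]."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4    # parity over positions 1,3,5,7
    p2 = d1 ^ d3 ^ d4    # parity over positions 2,3,6,7
    p3 = d2 ^ d3 ^ d4    # parity over positions 4,5,6,7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_correct(c):
    """Recompute the parities; the syndrome is the 1-based error position."""
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + (s2 << 1) + (s3 << 2)
    if syndrome:
        c[syndrome - 1] ^= 1   # flip the erroneous bit back
    return c

code = hamming74_encode([1, 0, 1, 1])
corrupted = list(code)
corrupted[4] ^= 1                             # flip one bit "on the wire"
assert hamming74_correct(corrupted) == code   # receiver recovers the codeword
```

    The design trade-off is the same one PAM4 links face at scale: the extra parity bits cost bandwidth, but they buy back an acceptable post-correction error rate.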

    Keeping Things Cool

    Processing so much information at a time causes significant heat buildup in the pluggable modules, which, if improperly dealt with, can damage the equipment. Heat dissipation through these modules is, therefore, essential to the functioning of the entire system. Here, test and measurement equipment once again plays a vital role. The interconnects used to stress-test ports or systems, called loopbacks, run ports at their highest power threshold to see how they cope, and what additional cooling methods are required to allow for more effective heat dissipation.

    (Data Center Rack Being Tested)

    Connecting to Lebanon

    My expertise is in test and measurement, but my passion has always been to see my country thrive. 30 years ago, I promised my father amongst the olive and pine trees of my ancestral village that I would return to Lebanon when I was ready to create high tech jobs for my fellow Lebanese. Lebanon has never lacked for talent, only opportunity, and my goal is to bring these opportunities home. 

    Shifting Lebanon's economic focus from internal to international would go a long way toward solving our current crisis. All that is needed is a better ecosystem that enables global competition and moves our economy to double-digit growth. Updates to our labor laws would prove very helpful, as they would properly incentivize international companies. However, even without them, Lebanon is rising to the occasion remarkably, all things considered.

    Initiatives like my own Houmal Technology Park (HTP) already stand as a testament to Lebanon's capacity to compete on an international scale. Even in the midst of economic turmoil, bright young Lebanese are working to turn their country into a hub for the ICT industry. One of the companies headquartered at HTP, MultiLane, keeps pace with the lightning-fast high-speed I/O industry, shipping in excess of 4,000 interconnect modules every week. Test and measurement instruments manufactured right here are being used in major data centers around the world.

    Looking to the future, if our work continues as it has, I anticipate even greater growth and more local opportunities as the world starts to take notice of Lebanon's untapped potential, and I will continue to strive to create them.

  • 16 Apr 2021 4:45 AM | Anonymous member (Administrator)

    Author: Ali Khayrallah has been working away at the G’s of mobile for many years. He leads a research team shaping the future of mobile technology at Ericsson in Santa Clara. He is currently focused on 6G efforts in the US with industry, academia and government.

    [Disclaimer: The below article is the author's personal opinion]

    Just as the main operators in North America are completing the first wave of 5G network rollouts and 5G phones are becoming mainstream, we are starting to hear about 6G (or Next G, or whatever name sticks eventually). 

    Why so soon and what will it do for us? This article will try to give you a glimpse of some answers.

    The long game

    History doesn’t quite repeat itself but it kind of rhymes. Each ‘G’ (for generation) of mobile from 2G to 4G has lasted about 10 years, and it seems 5G will too. So we can guess that the 6G era will start around 2030. What is less obvious to the general public is that the buildup also takes a decade, so the time to start working on 6G is now. As you will come to appreciate, this is truly a long game from early research to commercial deployment on a global scale. Each new G offers an opportunity for radical changes, unconstrained by necessary compatibility within a single generation. To get there, we need time: to do the research and mature the technologies that potentially drive changes; to integrate them into complex systems and figure out ways to exploit their potential; to reduce them to practice and understand their feasibility; to create standards that incorporate them; to design products and services based on those standards; and finally to deploy networks.

    I will first talk about what 6G is about, then discuss how to get there, in particular standards and spectrum, as well as geopolitical factors that may help or hinder us.


    6G: use cases and benefits

    It is of course difficult today to pin down the technologies that will enable 6G networks or the use cases that will drive the need for them, but we can paint a big picture of where we might be headed. 

    We expect the trend towards better performance in customary metrics such as capacity, bit rate, latency, coverage and energy efficiency to continue, as it has in previous G’s. To that end, we foresee further improvements in workhorse technologies such as multi-antenna transmission and reception, in particular more coordination of transmissions across sites. Also, the insatiable appetite for more spectrum will continue to lead us to ever higher frequencies, into the low 100’s of GHz. The need for ubiquitous coverage will push for integration of non-terrestrial nodes such as drones and low earth orbiting satellites into terrestrial networks. The success of these various directions hinges on solving a wide array of tough technical problems.

    Networks will also need to evolve in other ways, such as trustworthiness, which entails the network’s ability to withstand attacks and recover from them. One aspect is confidentiality, which goes beyond protection of data during transmission to secure computation and storage. Another aspect is service availability, which requires resilience to node failure and automated recovery.

    We can also think of use cases that will create the demand for 6G. One is the internet of senses, where we expect the trend from smartphones to AR/VR devices and beyond to continue, involving most of our senses, merging the physical and virtual worlds, and putting very tough latency and bit rate requirements on the network. Another is very simple, possibly battery-less devices such as sensors and actuators for home automation, asset tracking, traffic control, etc. Such devices must be accommodated by the network with appropriate protocols. Yet another is intelligent machines, where the network provides specialized connectivity among AI nodes, allowing them to cooperate. Speaking of AI, it is also expected to increasingly pervade the operation of the network itself, moving down from high-level control closer to signal processing at the physical layer.

    Setting up standards: why do we need them?

    It sounds so 20th century but there are very good reasons, the main one being mobility. In mobile communications we need well defined interfaces so network elements speak and understand the same language. Phones move around and they have to be able to connect to different networks. Within a network, components from different vendors have to work together. Standards define the interfaces to make it all work together, and they do much more, including setting the minimum performance requirements for phones and base stations. In practice, companies spend a lot of money and effort on interoperability testing to ensure their equipment plays well with others.

    Three main ingredients to 6G success (or failure)

    3GPP

    In the mobile industry, the main standards body is 3GPP, which issues releases about every 18 months. A release fully defines a set of specifications that can be used to develop products. For example, Release 15 (2018) provided the first specifications for 5G, primarily covering the signaling and data channels to support improved mobile broadband. One particularly useful feature is the so-called flexible numerology, which enables the same structure to be adapted for use over a wide range of frequency bands. Release 16 (2020) added several features, including unlicensed spectrum operation and industrial IoT. Release 17, currently under construction, will include operation at higher frequencies, more IoT features, and satellite networks. From where we stand today, we expect the first release with 6G specifications around 2028.

    3GPP standards enable mobile networks to flourish globally, making it possible to recoup the enormous R&D investments. Since the advent of 4G, there has been a single effective standard worldwide. Earlier, there were two dominant factions developing the CDMA and GSM families of standards. This split probably led to the failure of several large companies. In our industry, fragmentation is the F-word. I will revisit this in the context of current geopolitics.

    Spectrum

    Until recently, all mobile spectrum was in the low band (below 3 GHz), which has become jam-packed not only with mobile but many other services. The psychedelic colored spectrum map gives you a feel for it. With 5G, the floodgates have opened, with new spectrum becoming available in mid band (roughly 3 to 7 GHz) and high band (24 to 52 GHz). These higher bands are great because it’s possible to operate with wider bandwidths (in 100’s of MHz compared to 10-20 MHz in low band) and support higher rate services. But propagation characteristics in higher bands make for challenging deployment, as signals don’t travel well through walls etc. Moving into even higher bands in the 100’s of GHz will exacerbate this problem. Also, spectrum used by legacy systems will get gradually re-farmed for use by new networks. In addition, there is a push led by the FCC (Federal Communications Commission) to mandate spectrum sharing between networks and incumbent users such as radar as a way to accelerate spectrum availability. The CBRS band at 3.55 GHz is the leading example of this type of policy. Keep in mind that spectrum is our lifeline and we’ll take it and make the best of it wherever and however it’s available.
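    Why wider bandwidth matters so much can be seen from the Shannon capacity formula, C = B·log2(1 + SNR). A minimal sketch, comparing a 20 MHz low-band carrier with a 400 MHz mmWave carrier at an assumed (illustrative) 20 dB SNR:

```python
# Shannon capacity ceiling C = B * log2(1 + SNR) for two carrier bandwidths.
# The 20 dB SNR is an illustrative assumption, not a measured figure;
# real links in high bands typically see worse SNR, which is the trade-off.

from math import log2

def shannon_capacity_bps(bandwidth_hz, snr_db):
    snr_linear = 10 ** (snr_db / 10)      # 20 dB -> 100x
    return bandwidth_hz * log2(1 + snr_linear)

for bw in (20e6, 400e6):
    c = shannon_capacity_bps(bw, 20.0)
    print(f"{bw / 1e6:.0f} MHz -> {c / 1e6:.0f} Mb/s ceiling")
    # 20 MHz -> ~133 Mb/s, 400 MHz -> ~2663 Mb/s
```

    Capacity scales linearly with bandwidth but only logarithmically with signal power, which is why the industry keeps chasing wider channels in ever higher bands despite their harsher propagation.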

    Geopolitics

    The “trade is good” principle that has dominated government policies since the fall of the Soviet Union seems to be on its way out, being replaced by more nationally centered policies. In this context there is now keen awareness of the rise of China as a serious technological rival to the US and its allies. This has manifested itself to a full extent in telecom with all the recent attention on 5G and mobile networks as a strategic national asset.

    There is wide support in Congress for big spending on technology R&D, including 6G, evidenced by several proposals under discussion around the National Science Foundation (NSF) alone. Their common thread is a multifold budget expansion and an increased emphasis on technology transfer.

    In the private sector, the Alliance for Telecommunications Industry Solutions (ATIS), which represents the interests of the telecom industry in North America, has launched the Next G Alliance to develop a roadmap toward 6G, lobby the government to influence policy, and secure funding for R&D.

    This is all good on the national scale, but it may come back to bite us with standards fragmentation and the threat of losing the global market scale. Navigating this complicated landscape will be challenging and it will be fascinating to me to see how it all plays out over the coming years.


  • 25 Nov 2020 6:45 AM | Anonymous

    This is the first part of a series on Executive Coaching and Leadership Development for professionals.

    Executive coaching has exploded in popularity in the last decade and today benefits from an army of passionate advocates, including not only the coaches but also the participants who have personally benefited from coaching and the organizational sponsors who have witnessed its transformational power firsthand.

    Between 25 and 40 percent of Fortune 500 companies use executive coaches, according to the Hay Group (acquired by Korn Ferry), a major human-resources consultancy. Lee Hecht Harrison, the world's leading career management firm, derives a full 20 percent of its revenues from executive coaching. Manchester, Inc., a similar national firm, finds that about six out of ten organizations currently offer coaching or other developmental counseling to their managers and executives, and another 20 percent of companies plan to offer coaching within the next year. Today, Cisco, Google, Uber, and Facebook, among others, have created internal coaching departments and hired some of the brightest executive coaching minds.

    There are many definitions of executive coaching, but the two most straightforward definitions that we prefer to use are "a relationship in which a client engages with a coach in order to facilitate his or her becoming a more effective leader" (Ely et al.) and "the facilitation of learning and development with the purpose of enhancing effective action, goals achievement, and personal satisfaction."

    While these definitions provide a broad description of its intended purpose, the following criteria are used to more strictly define executive coaching:

    1. One-on-one interaction between an executive coach and the client – as opposed to team coaching, team building, group training, or group consulting. Coaches and clients usually interact through live sessions, weekly or bi-weekly for 60 to 90 minutes.
    2. Methodology based – drawing on specific tools, methods, and techniques that promote the client’s agenda to uncover their own blind spots, identify their challenges, and develop their own goals.
    3. Structured conversations led by a trained professional – as opposed to more traditional mentorship that takes place between managers, HR professionals, and peers. These conversations focus on identifying and strengthening the relationship between the client's own development and the requirements of the business. As the complexity of the business increases and the expectations on leaders grow, they find themselves needing to develop new skills and behaviors while eliminating self-inhibitors.
    4. Task-oriented – Executive coaching involves important stakeholders beyond the client and the coach; the goals and future outcomes for the organization are central to the process. Through a sequence of explorations and small goal achievements, the coach helps the client take action continually in small increments, creating long-lasting behavioral changes and results for both the client and the organization.
    5. Long-term Impact – intended to enhance the person’s ability to learn and develop new skills independently. The model focuses on developing the client’s capacity, knowledge, motivation, insights, and emotional intelligence maturity in order to effect long-term benefits.
    There are also many areas of expertise in which executive coaches can support clients:
    1. Business Acumen – focus on a deep understanding of best business practices and strategies, management principles and behaviors, financial models, business models and plans, and startup life cycles. While business consultants are hired to provide business relevant answers, executive coaches with business acumen guide the clients to define their own challenges, and develop their own solutions that align with their career and organizational goals.
    2. Organizational Knowledge – focus on design, structure, power and authority, alignment, culture, leadership models, company goal achievement, and leadership development. The complexities of organizational models are often invisible to the untrained eye, or to coaches with no relevant prior personal experience.
    3. Coaching knowledge – focus on coaching methodologies, competencies, practices, assessment, personal goals achievement, as well as being students of lifelong learning and behavioral improvement. While there are many leaders providing coaching to their peers and teams, the work of professional executive coaches within organizations involves unleashing the human spirit and expanding people’s capacity to stretch and grow beyond self-limiting boundaries.
    “Executive coaches are not for the meek. They’re for people who value unambiguous feedback. If coaches have one thing in common, it’s that they are ruthlessly results-oriented,” according to an article in Fast Company Magazine. This quote draws the major boundary between executive coaching and less structured activities such as advising, consulting, or peer mentoring.

    In the next part of this series, we will explore the challenges and learnings on how to become a rock-star leader.

    Main image via Pexels.

    As an Executive Coach, Elie Habib guides CEOs, entrepreneurs, and senior executives toward performance excellence and acceleration of their career aspirations.

    He serves as a thought partner in guiding leaders to address their most complex leadership challenges.

    Elie is CEO of MotivaimCoach, Lebnet co-founder, Investment Committee member of MEVP’s Impact Fund (Lebanon), and prior corporate executive and CEO/founder.

  • 25 Nov 2020 6:41 AM | Anonymous

    This article is part of an expert series written by industry experts. In this part, Nadim Maluf, the CEO of Qnovo Inc., discusses the lithium-ion battery breakthrough and its impact on the electrical grid.

    On 9 October 2019, the Royal Swedish Academy of Sciences awarded the Nobel Prize in Chemistry to three scientists for “the development of the lithium-ion battery.” It was a long overdue recognition for John Goodenough, Stanley Whittingham, and Akira Yoshino, and for the thousands of engineers and scientists who have made rechargeable batteries a pillar of a mobile society.

    Anyone around the globe can identify lithium-ion batteries as the main power source in their smartphones and laptop computers and, increasingly, in new generations of electric vehicles. If you drive one, like a Tesla, you are quite fluent in its capabilities and limitations. Yet few recognize how central lithium-ion batteries have become to our global economies, and the extent to which the “green revolution” relies on energy storage and battery systems. The purpose of this article is to shed some light on the underlying technologies and applications, both present and future.

    In many respects, a lithium-ion battery is a simple device: it has two electrical terminals, positive and negative. Yet, in many other respects, it is complex, or at least evokes a sense of complexity, because it involves “chemistry,” a topic of unpleasant memories for many college graduates.

    The basic structure of a lithium-ion battery

    In its most basic form, a lithium-ion battery consists of three sandwiched layers rolled together inside a package: an anode, a cathode, and a porous separator in between. During charging, lithium ions travel from the cathode to the anode through the pores of the separator. The opposite occurs during discharging.

    The battery inside your smartphone looks very much like the one described above. The battery inside an electric vehicle consists of hundreds — or in some cases thousands — of individual batteries (called cells) electrically connected together to provide more electrical charge and energy.

    Stored energy determines the life of the battery, i.e., the duration of time the energy is available to a user. The basic unit of energy is the watt-hour (Wh). The energy capacity of a small smartphone battery is about 15 Wh, sufficient to power a device for a day. That of an electric vehicle is nearly 100,000 Wh, often written as 100 kWh. This amount is sufficient for a driving range of approximately 500 km, or 5 hours at highway speeds. Batteries intended for the electric grid store far larger amounts of energy, typically several million Wh (MWh).
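    These figures are mutually consistent, as a quick restatement of the arithmetic shows (watt-hours are simply watts times hours; the per-km consumption derived below is an implication of the article's numbers, not a quoted specification):

```python
# Restating the article's energy figures: Wh = W x h, and range follows
# from pack energy divided by consumption per km.

phone_wh = 15
ev_wh = 100_000          # 100 kWh pack
range_km = 500
trip_hours = 5

# A 15 Wh phone battery lasting ~24 h implies roughly 0.6 W average draw.
print(f"Average phone draw: {phone_wh / 24:.2f} W")

# 100 kWh over 500 km implies ~200 Wh/km consumption.
print(f"Implied EV consumption: {ev_wh / range_km:.0f} Wh/km")

# 500 km in 5 hours -> 100 km/h, consistent with "highway speeds".
print(f"Implied average speed: {range_km / trip_hours:.0f} km/h")
```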

    The number of times a battery can be charged and discharged is called its "cycle life." In principle, charge-discharge cycling could continue indefinitely, but degradation of structural materials within the battery limits its lifespan to fewer than 1,000 cycles. That works well for most applications.

    Charge time is another measure of importance, especially for consumer devices and electric vehicles.

    As the old saying goes, there is no such thing as a free lunch. Stored energy, cycle life, and charge time are all interrelated. For example, repeated fast charging may accelerate battery degradation, thereby shortening its lifespan (or cycle life). Such complex interactions force manufacturers to optimize the design of a battery for its intended application.

    The success of lithium-ion batteries in modern times is largely due to their favorable economics. The cost of batteries plummeted over the past decade from US $1,000 per kWh to nearly $100 per kWh. Forecasters predict that electric vehicles will reach cost parity with traditional combustion-engine cars by 2024. Combined with government regulations on greenhouse gas (GHG) emissions, this cost decline is inexorably transforming the automotive and transportation industries.
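To make the decline concrete, consider its effect on a 100 kWh vehicle-sized pack (the pack size quoted earlier in the article); the pack-level cost falls by an order of magnitude:

```python
# Illustrative pack-cost arithmetic from the per-kWh figures quoted above.
pack_kwh = 100                   # EV-sized battery pack

cost_start = 1_000 * pack_kwh    # ~$100,000 at $1,000/kWh
cost_end = 100 * pack_kwh        # ~$10,000 at $100/kWh

decline_factor = cost_start / cost_end
print(f"Pack cost fell from ${cost_start:,} to ${cost_end:,} ({decline_factor:.0f}x)")
```

A $10,000 pack is within reach of mainstream vehicle pricing, which is why the cost-parity forecast hinges on the per-kWh figure rather than on any single vehicle design.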

    Beyond consumer devices and electric vehicles, electric utilities are exploring the use of large-scale lithium-ion batteries for their grids. Many are familiar with pairing batteries to residential solar panel installations for the purpose of going off-grid. The reality is that such an application is limited in appeal to affluent suburban or rural areas; dense urban geographies will remain dependent for the foreseeable future on electric utility companies.

    Several utilities around the globe are piloting the use of lithium-ion batteries to offset a timing imbalance, dubbed the "duck curve," between electric power demand and renewable energy production. Solar power peaks in the afternoon hours, causing traditional fossil-fuel power plants, namely gas-powered turbines, to throttle down their production. Yet these turbines need to ramp up rapidly again in the evening to make up for rising power demand after the sun sets. This steep decline in traditional power generation in the afternoon, followed by a rapid ramp in the evening, causes significant stress on the grid and worsens greenhouse gas emissions.

    Enter lithium-ion batteries. They soak up the excess solar energy generated during daylight and then deliver it after the sun goes down. The result is a flatter power generation profile for traditional fossil fuel power plants with improved operating efficiencies, lower GHG emissions and better economics.
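This load-shifting idea can be sketched with a toy simulation. All numbers below are invented for illustration (they are not CAL ISO data), and the battery is assumed lossless and large enough to absorb the full midday surplus:

```python
# Toy "duck curve" flattening: a battery charges on the midday solar
# surplus and discharges into the evening peak. Hourly MW values are
# invented for illustration.

solar  = [0, 0, 5, 12, 15, 12, 5, 0, 0]         # solar output, morning to night
demand = [10, 11, 12, 12, 12, 13, 16, 18, 16]   # total power demand

# Without storage, conventional plants must cover demand minus solar:
net_load = [d - s for d, s in zip(demand, solar)]

# With storage, aim for a flat conventional output at the average net load.
target = sum(net_load) / len(net_load)
battery_flow = [nl - target for nl in net_load]   # >0 discharge, <0 charge
flattened = [nl - f for nl, f in zip(net_load, battery_flow)]

# Required battery capacity: the swing in the battery's state of charge (MWh,
# with one-hour time steps).
soc, track = 0.0, []
for f in battery_flow:
    soc -= f            # charging (f < 0) raises the state of charge
    track.append(soc)
capacity_mwh = max(track) - min(track)

print(f"Flat conventional output: {target:.1f} MW")
print(f"Battery capacity needed: {capacity_mwh:.1f} MWh")
```

Even this toy version shows the key sizing question a utility faces: the battery's capacity is set by the cumulative surplus it must absorb, not just by the peak power it must deliver.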

    In 2018, the California Energy Commission approved a mandate to install solar panels on all new single-family homes constructed after 2020. With solar generation thus guaranteed to rise steadily, batteries become a critical component in integrating renewable sources of energy with the traditional grid.

    Duck Curve: Timing imbalance between peak demand and renewable energy production in California.
    (Source: California Independent System Operator CAL ISO)

    Traditional grids historically consisted of large power production plants in distant locations and extensive transmission grid lines to transport the power to large urban areas.

    Power plants adjusted their energy outputs to match the exact demand at each moment in time. Future grids will evolve toward more distributed designs, integrating renewable energy sources (e.g., solar, wind) in proximity to or within urban boundaries, along with energy storage systems that store energy when it is generated and release it when it is needed.

    California leads the nation in energy storage with 4,200 MW of installed capacity — enough to power nearly 1 million households. California Senate Bill SB100 mandates that the state receives all its energy from carbon-neutral sources by 2045. Both the state legislature and the California Public Utilities Commission (CPUC) have imposed specific energy storage targets for investor-owned utilities operating across the state.

    Looking out to the next decade, energy storage and batteries will become central to global energy and transportation policies. It is no surprise that forecasters estimate the market for lithium-ion batteries to exceed $300 billion by 2030.

    Main Image via Pexels

