
  • Voltage Drop and Losses

Voltage Drop is the loss of voltage in a non-ideal conductor. In other words, conductors have an impedance, a resistance and a reactance, that causes the voltage magnitude at the supply end of a conductor to differ from the voltage at the load end when current is flowing. Voltage drop can become a serious design issue: if we were to lose 50% of the voltage in a cable before it reaches the load, we can't reasonably expect the load to operate correctly.

Voltage Drop in a Nutshell

The National Electrical Code addresses voltage drop only through informational notes (sections that aren't enforceable). It says that branch circuits should generally have a voltage drop of less than 3%, and that total circuits (feeders plus branch circuits) should have a voltage drop of less than 5%. The reason this guidance exists only in informational notes is twofold. First, the NEC isn't a guide for good designs, only a guide to keep designs safe. If voltage drop is too high, your installation probably won't become unsafe; it just won't work the way you want it to. Second, the NEC requires electrical equipment to be installed in accordance with its instructions. If equipment has its safety impacted by a voltage range and this is required in the instructions, then this is also required by Code (see Article 110).

Voltage drop calculations differ for DC circuits, AC single-phase circuits, and AC three-phase circuits. The relevant formulas are:

DC Circuits: V = 2 I L rdc
AC Single-Phase Circuits: V = 2 I L z
AC Three-Phase Circuits: V = √3 I L z

Where:
V is the voltage drop across a conductor in Volts, as measured between line conductors
I is the load current flowing through a conductor in Amps. Modifications for continuous loading should not be included.
L is the one-way length of the circuit in feet. This length is the same whether your circuit has 2 wires or 3 wires.
rdc is the DC resistance of the conductor in Ohms/foot
z is the effective AC impedance of the conductor in Ohms/foot

The effective impedance z depends on the power factor of the circuit, which may be difficult to know in advance. A conservative value for the effective impedance is:

z = √(rac² + x²)

Where:
rac is the AC resistance of the conductor in Ohms/foot
x is the reactance of the conductor in Ohms/foot

Values for conductor DC resistances and AC impedances (reactances and resistances) can be taken from NEC Chapter 9 Tables 8 and 9 for a wide range of installation methods. If available, manufacturer data is even better. Values from Chapter 9 are provided at an operating temperature of 75°C. If a different operating temperature is desired for a more accurate computation of the voltage drop, the DC or AC resistance can be corrected using:

r' = r (1 + a (T - 75°C))

Where:
r' is the temperature-corrected resistance of the conductor
r is the 75°C resistance of the conductor
a is a temperature coefficient, defined as 0.00323/°C for copper and 0.00330/°C for aluminum
T is the desired operating temperature for correction in °C

It is important to recognize the limitations of the values provided in the NEC. Chapter 9 Tables 8 and 9 are based on low voltage configurations (600V or less). Medium voltage circuits may have different values for resistance and reactance than those presented in Chapter 9 Table 9. Unlike low voltage circuits, medium voltage conductors often have metallic shields and dielectric materials that contribute to resistive losses in the conductor.
Additionally, reactances can vary substantially based on the installation methods of the conductors. The values in Chapter 9 Table 9 will be approximately accurate for medium voltage conductors installed in a configuration similar to the ones described for 600V circuits, but should not be used if a high level of precision is required.

Example: What is the approximate voltage drop across a three-phase, 480V circuit running 1000' in PVC conduit with a load current of 200A and a conductor type of 500 kcmil copper operating at 75°C?

Solution: Begin by identifying the correct type of circuit. Since this is a three-phase AC circuit, voltage drop is computed as:

V = √3 I L z

Next, identify the obvious variables:
L is the one-way length of the circuit, 1000'
I is the load current, 200A

Then, move on to the less obvious variable, z. We don't know the power factor, but we can approximate the effective impedance conservatively as:

z = √(rac² + x²)

rac and x can be taken from NEC Chapter 9 Table 9 for 500 kcmil copper conductors in PVC conduit: rac = 0.027 Ohms/kft and x = 0.039 Ohms/kft. Putting everything together:

z = √(0.027² + 0.039²) = 0.0474 Ohms/kft
V = √3 × 200A × 1 kft × 0.0474 Ohms/kft = 16.4 V

We can normalize this voltage drop against the line-line voltage for the system, 480V, to get a sense of whether our voltage drop is acceptable:

16.4 / 480 = 3.4%

This voltage drop exceeds the limit suggested by the NEC for branch circuits but is less than the limit for the branch circuit and feeder together. Depending on the equipment requirements, this may be an issue.
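If you like to sanity-check these calculations by hand or in a script, here is a minimal Python sketch of the three-phase calculation above. The function names and the rounding are my own; the impedance values are the Chapter 9 Table 9 numbers used in the example.

```python
import math

def effective_impedance(r_ac, x):
    """Conservative effective impedance (Ohms/kft) when the power factor is unknown."""
    return math.sqrt(r_ac**2 + x**2)

def voltage_drop(i_load, length_kft, z, phases=3):
    """Voltage drop in volts: factor of 2 for DC/single-phase, sqrt(3) for three-phase."""
    factor = math.sqrt(3) if phases == 3 else 2
    return factor * i_load * length_kft * z

# Worked example: 500 kcmil Cu in PVC conduit, 1000 ft one-way, 200 A load, 480 V three-phase
z = effective_impedance(r_ac=0.027, x=0.039)      # Ohms/kft from NEC Ch. 9 Table 9
vd = voltage_drop(i_load=200, length_kft=1.0, z=z)
print(f"z = {z:.4f} Ohms/kft, Vd = {vd:.1f} V ({vd / 480:.1%} of 480 V)")
```

Running this reproduces the 16.4 V (3.4%) result from the example.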

  • Margin-Designing for the Real World

    "Electrical engineers design to standards". This was a phrase one of my college professors shared with me in a professional practice course my senior year. In a sense, he was absolutely right. Whether you work in communications, electronics, power, or elsewhere, electrical engineers are designing to standards set primarily by the IEEE (Institute of Electrical and Electronics Engineers) and the NFPA (National Fire Protection Association). These rules help the world stay connected by ensuring that designs are consistent and meet minimum guidelines for quality and safety. Standards are developed by professionals for professionals, and are critical to the world we live in as technical professionals. Unfortunately, the minimum often isn't enough. The reality is that a factor of safety, otherwise known as design margin, is usually needed to ensure safety and quality of operation. There are a few reasons why this is the case: Standards are interpreted - Design standards often don't tell engineers everything they need to know to complete a project. For any meaningful design, the standards are subject to interpretation and engineering supervision. Conditions change - Often, projects are designed based on information gathered at the project onset (e.g. a geotechnical report covering things like soil thermal resistivity). Over the design life of the project, design conditions will change and likely become less favorable at some point. A reasonable design will consider these aspects to ensure longevity and safety. Construction isn't perfect - As a designer, it's easy to put something onto paper that minimizes project costs. However, the design also needs to be buildable. If engineering drawings require a project to follow unreasonable tolerances, it is unreasonable to expect good results. Many electrical designers are apprehensive to add factors of safety because they believe that the National Electrical Code is already conservative. In some cases this is correct. For residential calculations of loads, the values used for conductor sizing are often highly generous. In many cases, though, the Code is far from conservative. Medium voltage ampacity tables used in the NEC assume highly favorable soil conditions as a basis for design. Without additional derating and consideration of soil parameters, one may be inclined to undersize cables. Of course, engineers have to be careful about adding too much design margin. Adding margin beyond the amount necessary to ensure quality and safety just means adding cost; projects get more expensive and little to no benefit. It is the job of power systems engineers to solve this problem and choose the correct solution. Design margin doesn't come in the form of a simple number that gets added onto everything, it's an entire design philosophy. What is the maximum temperature you are designing to? How heavily will you load you conductors? What is your basis for withstand determination? All of these kinds of questions need to be answered to ensure a good design.

  • Working with Other Engineering Disciplines

In the real world, electrical engineers don't work in isolation. For better or worse, there are always overlaps between engineering disciplines. For example, consider building a residential property like an apartment building. The electrical engineer's goal is to get power to every home in the complex. However, this electrical design will face many obstructions from other disciplines. Consider these obstacles and the disciplines that need to be involved:

How does power get routed around the building? We will need to make sure to avoid HVAC ductwork and other mechanical systems. It's best to talk to a mechanical or architectural engineer about this.

What about installing heavy equipment like transformers? Is the building rated to carry that weight? That's a question for a structural engineer.

How do we get power into the building? If we're using an underground service entrance, we'll need to coordinate the route for this power with civil engineers to make sure we aren't passing through non-buildable areas like detention basins.

These are just a few examples of the intricacies of cross-discipline coordination. And for a familiar topic at that! All kinds of infrastructure, industrial, and power projects face challenges like this. Here are some tips to ensure good coordination between engineers:

Establish clear roles and responsibilities at the onset of the project: There's nothing worse than having a problem and not knowing who to talk to about it. By establishing responsibilities and open lines of communication at the beginning of a project, it becomes easy to figure out who has the solution. In our electrical world, it feels like we know everything, but it's easy to be confused about scope.

Brainstorm conflicts in advance: It might sound silly, but think of the things that can go wrong between disciplines! If the project is new and complicated, start with something familiar and ask if the same problems could pop up for your project. You may not understand the chemical, structural, civil, and mechanical needs of a given project, but you can start by sharing your concerns about what you do know (even if it's just with reference to something like the apartment complex above).

Learn the basics of other disciplines: Electrical engineers shouldn't be expected to do calculations for other engineering disciplines (nor is it ethical). That shouldn't stop you from learning more about these other disciplines, though. When we understand the key facets of other engineering disciplines, we gain two main benefits. First, we can design our power systems with the needs and limitations of others in mind, saving time and rework. Second, we can build a stronger technical relationship with our counterparts. If we understand the challenges others are going through, it makes it easier for them to empathize when we have a problem.

  • Building a Better Schedule

Schedules are the lifeblood of a successful project. Big or small, expensive or lower-budget, setting a good schedule helps you understand where you stand in the project process and whether you are really meeting your goals. For larger projects tied to financing for deliverables at certain points in time, a schedule may be VERY important. The three most common ways I have seen schedules created are as follows:

1. Use the median/average estimate of how long it takes to complete a task. For example, if a morning meeting typically takes me 30-60 minutes, I budget 45 minutes in my schedule. The benefit of this method is simplicity. On average, this task takes a certain amount of time, so you budget for that amount of time. If you were to repeat this task (e.g. the morning meeting) many times, you would likely end up on schedule by the end of the project. This could expose you to some big schedule risk (delays) as well, especially if a task has a big range of time it could take.

2. Use the worst-case estimate of how long it takes to complete a task. For that same meeting in point 1, now I budget 60 minutes. This is conservative and guarantees that I don't mess up any downstream schedule events. Conversely, the estimate may be so conservative that I don't have nearly as much bandwidth to get things done during the day. Nonetheless, this is still a nice and simple solution.

3. Use the median/average estimate with float. The simple solution to getting the best of both worlds is to build a baseline schedule off of the expected (average/median) values for how long it takes to complete a task. Then, "float" (free days added into the schedule for conservatism) is added in based on the risk that things take longer than expected. Oftentimes we would like float to be based on some mathematical relationship to the data at hand, but it tends to be based on instincts and mandated deadlines.

Of the three methods described above, option 3 is generally the best for developing a schedule on a project with high stakes. Still, it can be better. A little bit of complexity can go a long way in preparing a team to understand the potential for delays. Instead of thinking of medians and maximums, we should be thinking in terms of percentiles and outliers. When high quality records are kept of historical data, we can see the time it takes to do something as more than just a single point of data. For example, consider a histogram of the time it takes for 50 hypothetical teams to complete a task, where the 25% value is 2 hours, the 50% value is 3 hours, the 75% value is 4 hours, and the maximum value is 13 hours (wow, they're slow!). If we were to assign a task duration according to method 1 above, we would likely say the task takes 3-4 hours to complete (the median is 3 hours and the mean is 3.86 hours). If we were to assign according to method 2 and reduce our risk, we would say that 13 hours is needed. However, neither data point is a great option for a schedule. The distribution has too much of a tail, with 13 hours being a very unlikely value to see and 3 hours not providing any contingency.

A better duration estimate comes from a "trimmed" maximum. Instead of taking the 100% value for a conservative schedule plan, first remove outliers from the distribution and then look at the data again. If we use the classic definition of an outlier:

(Outlier) > (Third Quartile) + 1.5 × (Inter-Quartile Range)

then the maximum value we expect to see after removal of outliers is 6 hours. This is the value I would use in my schedule. It reflects the upper bound without getting out of hand. And, most importantly, it's rigorous and consistent. Personal bias on specific tasks, project history, and clients can lead us to skew our schedules, so maintaining a consistent methodology is of paramount importance. Any number of other trimmed-value approaches can work based on personal preference and risk aversion, but the important part is looking at the data as a whole. Percentiles and spread in the data are just as important as point values of the data. Building schedules can seem like a mundane task, but doing it right can make the difference between a successful project and a failure.
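If you keep historical duration records, the trimmed maximum is easy to compute. Here is a small Python sketch; the sample list of durations is hypothetical (not the actual dataset behind the histogram above), and it uses the quartile-based outlier rule described above.

```python
import numpy as np

def trimmed_max(durations, k=1.5):
    """Largest value remaining after dropping IQR-rule outliers (> Q3 + k*IQR)."""
    q1, q3 = np.percentile(durations, [25, 75])
    cutoff = q3 + k * (q3 - q1)
    kept = [d for d in durations if d <= cutoff]
    return max(kept)

# Hypothetical task-duration records in hours
durations = [2, 2, 3, 3, 3, 4, 4, 4, 5, 6, 13]
print(trimmed_max(durations))   # the 13 h outlier is dropped; 6 h becomes the planning value
```

The point is not the specific cutoff rule but that the whole distribution, not a single point estimate, drives the schedule.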

  • Thermal Resistivity and Soil Dryout

    When reading through NEC ampacity tables you'll notice that many make reference to an assumed "thermal resistivity of 90°C*cm/W". What's up with that? Thermal resistivity is a characteristic of any material to dissipate heat. Higher thermal resistivities mean that the material behaves more like a thermal insulator, keeping heat from flowing out and leading to elevated temperatures. Lower thermal resistivities mean that the material behaves allows heat to flow through more readily-think of something like a heat sink or a radiator. Soils are no exception to this characteristic. NEC Annex B provides some values for designers to reference: coastal damp soils are usually closer to 60°C*cm/W, average soils are 90°C*cm/W, and dryer, rockier soils are usually closer to 120°C*cm/W. Essentially, the code is filled with references to ampacities at 90°C*cm/W because that is an average value for soils around the United States. For a lower thermal resistivity soil (< 90°C*cm/W), conductor ampacity increases. Similarly, for a higher thermal resistivity soil (> 90°C*cm/W), conductor ampacity decreases. There's a lot more to this story than just that. Depending on the ground conditions, primarily moisture content, soils could vary from thermal resistivities below 50°C*cm/W to well over 300°C*cm/W. That leads to some big differences in ampacities. Organic materials (grass, plants, etc.) will have much higher thermal resistivity than soil and can create air pockets that make heat transfer even worse. That's why it's important to make sure soil backfilled around cables is properly inspected. So, given how much thermal resistivity can vary, are NEC ampacity tables even reliable? Well, yes, they are! Below is an example thermal resistivity dryout curve. For most moisture content values, including the natural value, soil thermal resistivities are close to or less than the NEC's value of 90°C*cm/W. Values skyrocket as moisture moves below what is known as the "critical moisture value". For the example below, the critical moisture content is approximately 4%. Values can easily approach 3x the NEC value of thermal resistivity and dramatically impact cable sizing. As long as we can stay away from that runaway dryout condition, NEC tables should remain a conservative method for sizing conductors. But how do we keep soils from drying out? Dryout is unlikely to occur if the external temperature of the conduit or cable that is in contact with the soil is kept low. How low is reasonable can be difficult to determine and site-specific assessment likely requires detailed geotechnical analysis. IEEE 141 makes a general recommendation of 60°C-70°C. Some conduits, like PVC, have very high thermal resistivity and will create a significant temperature differential between the surrounding soil and the conductors inside. Directly buried cables that have to pass through conduit will generally be much cooler when in direct contact with the soil than when in conduit, also lowering the potential for dryout. Dryout is a real problem and can substantially lower conductor ampacities. In some areas with naturally low moisture contents, dryout may be unavoidable and NEC tables for ampacity may not be accurate. A detailed geotechnical analysis and considerations of conductor loading will mitigate this risk.
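To get a rough feel for how much soil thermal resistivity matters, here is a back-of-the-envelope Python sketch. It assumes the earth path dominates the total thermal resistance, so that (per the Neher-McGrath relationship I = √(ΔT/(rR)) with R proportional to ⍴) ampacity scales roughly with the square root of 90/⍴. This is only an illustration; real adjustments should come from NEC Annex B or a full Neher-McGrath calculation.

```python
import math

def ampacity_rescaled(i_table, rho_soil, rho_table=90.0):
    """Rough ampacity rescaling for a different soil thermal resistivity,
    assuming the earth is the dominant thermal resistance in the circuit."""
    return i_table * math.sqrt(rho_table / rho_soil)

print(round(ampacity_rescaled(300, 60)))    # damp coastal soil: ~367 A for a 300 A table value
print(round(ampacity_rescaled(300, 270)))   # dried-out soil (~3x the NEC value): ~173 A
```

Even this crude scaling shows why runaway dryout, with resistivities several times the 90°C*cm/W assumption, can gut a cable's rating.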

  • Heat Transfer

Basics - Heat transfer isn't what comes to most people's minds when they hear "electrical engineering", but for power systems engineers it's a critical part of the design process. Without an understanding of basic heat transfer mechanics, we can't learn how to size cables correctly. First off, what is heat transfer? When I say heat transfer, I'm referring to the movement of heat (thermal energy) from one location to another. This transfer of heat coincides with temperature gradients between those points. Heat transfer occurs through three mechanisms:

Conduction: heat is transferred through a solid material
Convection: heat is transferred via circulation of a fluid (think of the phrase "heat rises")
Radiation: heat is transferred via emission of electromagnetic radiation

Heat transfer via radiation and convection are the dominant factors in aboveground conductor sizing. These effects, especially convection, can be difficult to characterize mathematically without empirical analysis. Fortunately, power systems engineers rarely have to model this situation themselves. Aboveground, the effect of mutual heating between conductors is small. Circuits can be considered thermally independent of one another when they are separated by only a couple of cable diameters. Therefore, we can simply use ampacity tables taken from the NEC or IEEE; we don't have to calculate ampacities for each specific aboveground configuration.

Underground, the rules are very different. Conductive heat transfer applies for conductors buried underground. The earth acts as a thermal resistance, a medium that resists the flow of heat. This leads to mutual heating between conductors underground, even at large spacings. Often, spacings of 10' or more are required to achieve effective thermal independence.

Thermal Resistance - Thermal resistance is an analog of electrical resistance that describes the relationship between heat flow per unit time Q' and the temperature difference across the resistance ΔT. Mathematically:

ΔT = Q' R

Where, for a 2-dimensional geometry like a conductor buried in the earth:
R is the thermal resistance of the material (earth, insulation, etc.), typically measured in °C*ft/W
Q' is the heat flow per unit time per unit length, typically measured in W/ft
ΔT is the temperature difference between the heat source and the ambient, measured in °C

Although this equation has been referenced to underground burial of conductors, the same physical equation applies to any situation where heat is transferred via conduction, like the insulation in your home (insulation has a high thermal resistance). In this analogy, Q' behaves like a current source in an electrical circuit and ΔT is like a voltage difference. Thermal resistances in series add, and thermal resistances in parallel offer a lower combined thermal resistance than either individually. (Figure: thermal circuit with series resistances R1, R2.)

Neher-McGrath Equation - We can take the equation for conductive heat transfer above and expand it to consider the particular case of a cable buried in the earth. For a conductor underground, heat is created by current flowing through the resistance of the cable:

Q' = r I²

Where:
Q' is the heat flow of the conductor in W/ft
I is the load current in Amperes
r is the AC resistance per unit length of the conductor in Ohms/ft

This heat source can be substituted into the conductive heat transfer equation above and rearranged to obtain the following result:

I = √( ΔT / ( r R ) )

This equation is the simplest form of the Neher-McGrath equation described in the NEC, IEEE, and elsewhere. Instead of solving for temperature gradients based on the heat from a cable, we solve for the maximum load current I given a limit on the temperature rise ΔT. This slightly different formulation of the conductive heat transfer problem is a very common problem for power systems engineers to solve.

The Neher-McGrath equation above is easy to write down, but challenging to apply because of the thermal resistance R. Computing the thermal resistance of any particular cable configuration and construction type is a laborious process outlined by Neher and McGrath in their original 1957 paper. The thermal resistance is dependent upon the geometry of the situation: large contact areas over which heat can transfer make thermal resistance smaller, while thicker insulation that the heat has to pass through increases it. Additionally, thermal resistance is dependent upon a material property known as the thermal resistivity ⍴, measured in °C*cm/W. Thermal resistances of things like the earth, conduit walls, and conductor insulation are all proportional to the thermal resistivity ⍴. Today, numerous software programs exist that can speed up the process of calculating ampacities and thermal resistances.

Example: Solve for the ampacity of a single conductor directly buried in earth with the following properties:
Thermal resistance of the insulation: 10°C*ft/W
Thermal resistance of the earth: 40°C*ft/W
Ambient earth temperature: 20°C
Conductor maximum operating temperature: 90°C
AC resistance: 0.0003 Ohms/ft

Solution: Begin by recognizing that heat flowing out of the conductor must pass through both the insulation and the earth, so the thermal resistances are in series. We can define the total system thermal resistance as:

R = 10 + 40 = 50 °C*ft/W

We can then solve this problem by applying the Neher-McGrath equation for this particular thermal circuit:

I = √( ΔT / ( r R ) ) = √( (90 - 20) / (50 × 0.0003) ) = 68 A
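The worked example is easy to script once the thermal resistances are known. Here is a minimal Python sketch of the simplest Neher-McGrath form above; the function name is mine, and the inputs are the example's values.

```python
import math

def neher_mcgrath_ampacity(t_conductor, t_ambient, r_ac, r_thermal_total):
    """Simplest Neher-McGrath form: I = sqrt(dT / (r * R))."""
    return math.sqrt((t_conductor - t_ambient) / (r_ac * r_thermal_total))

# Worked example: insulation (10) and earth (40) thermal resistances in series,
# 90C conductor limit, 20C ambient earth, 0.0003 Ohms/ft AC resistance
r_total = 10 + 40                                  # series thermal resistances add
i_max = neher_mcgrath_ampacity(90, 20, 0.0003, r_total)
print(f"Ampacity ~ {i_max:.0f} A")                 # ~68 A
```

In practice the hard part is computing R itself, which is why dedicated ampacity software exists.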

  • Constant S and Constant Z Loads

Electrical power systems engineers, especially those working on systems at the distribution level, are generally required to complete load flow calculations for their projects. Load flow (also known as power flow) studies are used to analyze the steady-state voltages, currents, and complex powers that are present in an electrical system. The power flow calculation shows the designer how the system will behave under various conditions, like light and heavy loading. Engineers can use these results to ensure that there are no dangerous overvoltage or undervoltage conditions. Load flow studies rely on two main types of load models: Constant S and Constant Z.

The Constant Z model should be familiar to all electrical engineers, because it is just an application of Ohm's Law. In other words, a constant Z model assumes that the impedance of the load does not change over time, so the steady-state current drawn by the load is proportional to the voltage applied to the load. Constant Z loads are representative of devices like heaters, capacitors, and inductors. Mathematically, for a three-phase system:

I = In × V / Vn

Where:
I is the line current of the load in Amperes
V is the line-to-line voltage applied to the load in Volts
In is the rated current of the load in Amperes
Vn is the rated line-to-line voltage of the load in Volts

Constant S loads are used to model devices which draw a constant complex power. For these loads, as the voltage drops the current increases. Constant S loads often represent motors near their full-load conditions. This model is simplified and obviously reaches a point of inaccuracy as the voltage at the load approaches zero. Mathematically, for a three-phase system:

I = Sn / (√3 V)

Where I and V are defined the same as above, and Sn is the rated three-phase complex power of the load.

The discussion above is purely in terms of magnitudes. Full complex numbers and load power factors should be used for detailed load flow modeling to ensure accuracy. As you can see, the current flowing to a load increases with higher voltage for an impedance load (Constant Z), while the current decreases with higher voltage for a motor load (Constant S). When both types of loads are present together on a bus, the results can be non-trivial.
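To see the opposite voltage sensitivities side by side, here is a small Python sketch of the two magnitude-only formulas above. The 480 V heater and 50 kVA motor are hypothetical example loads, not from any particular study.

```python
import math

def constant_z_current(v, v_rated, i_rated):
    """Constant-impedance load: current scales directly with the applied voltage."""
    return i_rated * v / v_rated

def constant_s_current(v, s_rated):
    """Constant-power load: current rises as the voltage sags (magnitudes only)."""
    return s_rated / (math.sqrt(3) * v)

# Hypothetical 480 V bus: a 100 A (rated) heater bank and a 50 kVA motor near full load
for v in (456, 480, 504):   # -5%, nominal, +5% line-to-line voltage
    print(v, "V:",
          round(constant_z_current(v, 480, 100), 1), "A constant-Z,",
          round(constant_s_current(v, 50_000), 1), "A constant-S")
```

The constant-Z current drops with the voltage while the constant-S current rises, which is exactly why mixing the two on one bus produces non-trivial load flow results.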

  • Energy Storage Systems (NEC 706)

Battery energy storage systems (BESS) have long been held up as a vital part of the shift to renewable energy. Renewables like wind and solar PV are intermittent generation sources and are inherently unable to provide consistent power like generators with a fuel supply. Numerous chemistries exist, with benefits that range from energy storage density (Lithium-Ion) to duration of storage (Iron). The 2020 edition of the National Electrical Code doesn't actually have a section dedicated to BESS. Instead, there is a more general section related to all types of energy storage systems (ESS) in Article 706. Batteries themselves are covered in NEC Article 480. Some examples of ESS that aren't battery-based (courtesy of the Environmental Protection Agency):

Gravity: Electrical energy is stored as gravitational potential energy by pushing materials (like weights or water) up to a specific elevation. Pumped hydroelectric is an example.
Flywheel: Electrical energy is stored as kinetic energy in a flywheel. The flywheel is tapped later to extract the kinetic energy.
Thermal: Electrical energy is used to pump heat into a reservoir. A thermoelectric generator later uses this heat flux to produce energy.
Compressed Air: Electrical energy is used to compress a gas to artificially high pressures. The gas is then released later to extract energy.

All of these technologies have their own unique technical challenges. However, from the perspective of the power systems engineer, design around these components is relatively simple. All energy storage systems will come with ratings for the input and output currents of their associated devices (e.g. inverters, converters, etc.). In any case, the standard NEC practice of sizing to 125% of the continuous load and 100% of the noncontinuous load applies. Likewise, standard overcurrent protection requirements apply.

One unique requirement for energy storage systems relates to a piece of equipment referred to as a diversion charge controller. This piece of equipment regulates charging of an energy storage system by diverting excess power to a load. This type of system is required to have a diversion load rated for 150% of the charge controller rating and conductors with an ampacity of 150% of the rated current. (Figure: diversion charge controller block diagram; red arrows show the flow of power.)
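As a quick illustration of the two sizing rules mentioned above, here is a tiny Python sketch. The 80 A and 60 A device ratings are hypothetical examples, not requirements from the Code.

```python
def min_conductor_ampacity(i_continuous, i_noncontinuous=0.0):
    """Standard sizing basis: 125% of continuous plus 100% of noncontinuous current."""
    return 1.25 * i_continuous + i_noncontinuous

def diversion_sizing(i_charger_rated):
    """Diversion charge controller rule: load and conductors sized at 150% of the rating."""
    return 1.5 * i_charger_rated

print(min_conductor_ampacity(80))   # 100 A minimum ampacity for an 80 A continuous inverter output
print(diversion_sizing(60))         # 90 A minimum for a hypothetical 60 A-rated diversion charger
```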

  • Critical and Creative Thinking in Engineering

Too many times have I heard an engineer (or student) tell me, "I like engineering because things are black and white. You can always find the right answer." I couldn't disagree more. In a classroom, this type of approach is common. You have a problem that requires a particular mathematical solution. If you work the problem correctly, you find the answer. If you don't, then the answer is wrong. In theory, this idea extends to the real world. If somebody requires you to design an electrical service to carry power from the utility to their building, there can only be one right answer for the minimum conductor size, right? There are two main reasons why engineering problems aren't so simple: complexity and subjectivity.

Complexity: The world is complicated. It is never possible to model everything that could take place in your design with full accuracy. This leaves room for judgment calls on how accurate is accurate enough.

Subjectivity: There are competing interests. There are always tradeoffs between schedule, cost, and performance.

These factors can be visualized with a simple matrix: creative thinking improves the subjectivity axis (value for the client), and critical thinking improves the complexity axis (accuracy of design). Engineers want every design to fall into the green box. When designs offer value to the client and are modeled accurately, everybody gets what they want. The project, whatever it may be, can be built knowing that people and property will be safe and that the plant will do what the client needs it to in the long run. Engineers never want to be in the red box. These designs are insufficiently detailed and don't give the client a reasonable schedule, cost, and/or performance. A design in the blue box is modeled accurately but doesn't meet the client's needs for value creation. These kinds of designs have the ability to become good designs with further innovation. Innovation is difficult, though, and external pressures may push engineers to cut corners by changing their design assumptions to yield more favorable results for cost, schedule, etc. The yellow box represents designs that look good to the client but have not considered all potential safety risks. These designs are essentially unfinished. Additional modeling could push them into the green or into the blue. External stakeholders, seeing that a favorable result has been achieved with preliminary modeling, may be inclined to push the design forward to a construction stage.

Defining the lines between these regions is the difficult part. When is a design modeled accurately enough to feel confident? When are all parties satisfied with the design? In a world filled with imperfect information, the answers aren't always clear. Here's a fictional example of how a design could flow through the various regions of subjectivity and complexity:

A client goes to Engineering Firm 1 to request a conductor be sized. Engineering Firm 1 fails to request any further information and provides a design based on massive conductors. The conductors are rated to carry over 10x the current at standard conditions, are expensive to purchase, and are difficult to install. This design is entirely too conservative and has made no effort to model the situation. It would fall in the red region of the matrix above.

Disappointed with Engineering Firm 1, the client moves on to Engineering Firm 2. Firm 2 requests additional information about the ambient temperature, the installation conditions, the user's needs, and more. This gives Firm 2 enough information to prepare a detailed design. The conductor size is reduced substantially, but the cost is still too high. Firm 2 has used critical thinking to bring the solution out of the red region and into the blue.

A little bit closer to a viable solution, the client goes to another engineer, Firm 3. Firm 3 recommends routing the conductors along an alternative path that offers better ambient conditions (temperature, air flow, etc.). This allows the installation to be lower-cost and the conductor size to be decreased. The project now falls into the green region thanks to the creative thinking of Firm 3. At this point, the design is good to go!

  • Capacitors (NEC 460)

General - Capacitors are commonly used as standalone devices in industrial, transmission, and distribution power systems to compensate for the inductive effects of loads and lines. This compensation is commonly referred to as "power factor correction", since the resulting power factor after inserting a capacitor bank into the system will be closer to unity than before. (The black and white cylinders on a circuit board are small capacitors; larger banks are made up of many cylindrical capacitor units.)

Capacitors break the NEC's usual pattern of requiring conductor ampacity of at least 125% of the continuous load and 100% of the noncontinuous load. Instead, conductors are required to carry 135% of the capacitor's rated current. Why the difference? IEEE 18 defines standards for capacitor banks and requires that they be capable of loading to 135% of their rating. By sizing the branch conductors to 135% of the rating, we allow the capacitor to be fully utilized under overvoltage events.

Additional Design Considerations - There's more to consider with capacitors than just the ampacity problem. Some other issues that arise when inserting a capacitor bank into a system:

Harmonic resonance: An inductive system with a shunt or series capacitor bank will have a resonant frequency. If there are harmonic-generating devices nearby (like variable-frequency drives, motor soft starters, or inverters), the potential for power quality distortion due to these harmonics must be considered.

Transient overvoltage: When a capacitor is switched into service, there may be a rapid inrush of current and a transient "ringing" effect where the voltage oscillates beyond its steady-state condition, potentially to a very large value. The transient overvoltage problem can be mitigated with synchronous closing of the capacitor switching device and/or pre-insertion impedances used to limit the initial current flowing into the capacitor.

Steady-state voltage rise: Even if the transient overvoltage from a capacitor bank is resolved, this does not mean that the steady-state design conditions will be favorable. Capacitor banks can cause voltage rise where they are installed because the capacitance offsets the line inductance.

Discharge: Capacitor banks store energy in the form of an electric field. The residual voltage from this electric field will remain for a long period of time without a proper discharge resistor to drain the energy. The capacitor's stored energy presents a hazard for people in and around the equipment. Additionally, the leftover voltage on the bank can cause a phenomenon known as "restriking", where the voltage between the capacitor and the line feeding it becomes high enough to ionize the air and bridge the open circuit.
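Here is a small Python sketch of the 135% branch-conductor rule. It assumes the bank's rated current comes from the usual three-phase relationship S = √3·V·I (that formula isn't stated in this article, so treat it as my assumption), and the 300 kvar / 480 V bank is a hypothetical example.

```python
import math

def capacitor_branch_ampacity(kvar_rated, v_line_line):
    """Minimum branch-conductor ampacity for a three-phase capacitor bank:
    rated current from S = sqrt(3)*V*I, then the 135% factor."""
    i_rated = kvar_rated * 1000 / (math.sqrt(3) * v_line_line)
    return 1.35 * i_rated

# Hypothetical 300 kvar bank at 480 V line-to-line
print(round(capacitor_branch_ampacity(300, 480), 1))   # ~487 A minimum conductor ampacity
```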

  • Why do we need transformers?

Transformers are an essential part of AC power systems. It was transformers that brought about the AC revolution across the world, thanks to their ability to efficiently convert power between voltage levels. Transformers are important because of losses. Electrical conductors aren't perfect, which means that they have resistance. This resistance leads to real power losses as power is transported. In other words, the power you put into a cable isn't the power you get out. For a three-phase power line, the losses are given by:

d = 3 L r I²

Where:
d is the real power lost in the three-phase power line
L is the length of the circuit (length of one conductor)
I is the magnitude of the current flowing through the lines
r is the resistance per unit length of the line

The resistance per unit length r is inversely related to how much metal is in the conductor. In other words, a larger conductor will have a lower resistance than a smaller conductor. Of course, more material means more cost, so larger conductors are more expensive than smaller ones, all else equal. If we relied on conductor size alone to keep losses low, we would need very large, very expensive conductors. The amount of power flowing through the same three-phase lines described above is given by:

P = √3 I V cos(θ)

Where:
P is the power flowing through the conductors
I is the magnitude of the current flowing through the lines
V is the voltage between the lines of the conductors
cos(θ) is the power factor of the circuit

What's important from this relationship is that the power flowing through the lines is proportional to both the current and the voltage. In other words, we can transfer the same amount of power with a high current at a low voltage or with a low current at a high voltage. Since the losses in a line are proportional to the current squared (as stated above), the preferred choice is to transmit power at as high a voltage as practical. This is why the transformer is so important: by increasing the voltage, we can transfer power over long distances with comparatively negligible losses.

Why not just use high voltages for everything? Then you wouldn't need transformers. High voltages aren't without their problems. Voltage is a way of understanding the electric field in a circuit, a force field that acts on charges. Higher voltages mean that things need to be spaced out to make sure the electric field doesn't get too strong. If the electric field becomes large enough, it can actually ionize materials and create arcing faults with dangerous consequences. To get around this problem, we space things out and/or use additional electrical insulation. Transmission lines have conductors spaced far apart to prevent these problems, since there is plenty of room overhead. Within a residential or commercial space, devices that take up large amounts of space for high voltage would be incredibly impractical. Moreover, the savings in materials would be very low, since conductor sizes for things like outlets or lights in your home are already very small.

What about DC? Transformers are AC devices. They rely on Faraday's Law, a physical phenomenon by which an alternating magnetic field (as produced by an AC current) induces a voltage. With a DC power source, the magnetic field produced is constant, and, as a result, no voltage can be induced in the secondary of the transformer. Nowadays, there's more to the story, though. DC power systems can have their voltage altered just like AC systems using devices known as converters. Converters use semiconducting electronic devices like transistors to achieve this conversion. For a long time, electronic devices were too expensive and couldn't handle power at the levels required to make DC transmission viable. Now, that has all changed thanks to loads of innovation. DC transmission has its perks too, which is why we're starting to see more and more of it around the world.

In summary, we need transformers to get power from point A to point B on an AC system efficiently. Transformers make the power grid economical and are essential for AC power transmission and distribution. (Figure: high voltage transmission lines with conductors spread out to prevent ionization.)
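To put numbers on the loss argument above, here is a short Python sketch using the two formulas from this article. The 10 MW delivery, 50 km line, and 0.1 Ohm/km resistance are hypothetical values chosen only for illustration.

```python
import math

def line_current(p_watts, v_line_line, pf=1.0):
    """Current needed to move a given power: P = sqrt(3) * V * I * cos(theta)."""
    return p_watts / (math.sqrt(3) * v_line_line * pf)

def line_losses(i_amps, r_per_km, length_km):
    """Three-phase conductor losses: d = 3 * L * r * I^2."""
    return 3 * length_km * r_per_km * i_amps**2

# Hypothetical 10 MW delivery over 50 km of line with r = 0.1 Ohm/km
for v in (13_800, 138_000):   # distribution-level vs transmission-level voltage
    i = line_current(10e6, v)
    print(f"{v} V: {line_losses(i, 0.1, 50) / 1e6:.3f} MW lost")
```

Stepping the voltage up by a factor of 10 cuts the current by 10 and the losses by roughly 100, which is the whole economic case for the transformer.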

  • Do Electrical Engineers need to know CAD?

CAD stands for "computer-aided design". Taken literally, that could mean a ton of different things, since engineers use computers for design all the time. However, CAD almost universally refers to using computers to do spatially involved work with software programs like Autodesk AutoCAD or similar. CAD is all but necessary for some engineering disciplines, like civil engineering (think grading plans, hydrology, etc.). For electrical engineers, the necessity of CAD programs isn't so obvious.

My short answer to the title of this article is, "No. Electrical engineers don't need to know how to use CAD." A slightly better answer would be, "Electrical power systems engineers don't need to know how to use CAD at an advanced level." Electrical engineers in power systems can get by in a career without using CAD, but it will limit them. There are plenty of computer design software options out there with different applications, and nobody can be expected to know them all. That said, understanding the software that is important for your company and your particular projects is essential. For those working in the residential field, CAD programs focused on architectural elements like receptacle placement, service entrances, etc. may be the most important. For those working on the design of an industrial facility, 3D modeling software may be important to route duct banks, aboveground conduit and tray segments, and more. At a minimum, if you cannot use these programs to move around the model and measure things, your ability to design an electrical system to the highest quality will be impaired (or you'll at least make your teammates do a lot of work for you). Although electrical engineers may be less constrained by strict dimensional requirements, there are still some pretty important problems that are impacted by spacing. Here are some basic and common examples:

Voltage Drop - If a conductor's length is inaccurately measured (including any slack), then the voltage drop calculation will also be inaccurate. For long runs serving loads like motors, this could mean the difference between a design that works and one that doesn't.

Underground Heating - If conductors are installed underground in proximity to one another, there will be mutual heating. This reduces ampacities below the tabulated values for a conductor by itself. If we want to model ampacities accurately using Neher-McGrath software (like ETAP or similar), then we need to know all the relevant dimensions between conductors, conduits, duct banks, external heat sources, and more. All of this is essential for accurate modeling.

Access - This one's easy, but essential for a good design. The National Electrical Code (NEC), along with a variety of other standards, requires spacing around equipment to ensure a safe workspace. Non-electrical folks often neglect this issue and place equipment in locations that are not suitable for safety or operations & maintenance.

Beyond a basic knowledge of navigating CAD software, electrical engineers also need to understand what is and is not easily doable in CAD software. Sometimes, things that seem simple to complete are much harder than one would expect given the program's limitations. Other times, things that sound daunting if they were to be completed manually can be done in a fraction of the time with CAD software. It's never a good look to have wasted all day doing something that you could have completed in 10 minutes, had you just asked for help. My recommendation is that every engineer learn how to use CAD, even if just for the basics. It never hurts to pick up a new skill, and the workforce is always changing. Shifts in the market for electrical engineers from fossil fuels to renewables have changed the expectations of what electrical engineers do, and software knowledge is part of that. Plenty of free courses exist online, and you may find yourself surprised at just how easy it is to use a program like AutoCAD once you get in and start playing around!
