- Rethinking X/R as Time Constants
X/R ratios are a critical parameter in power system design. In short, the X/R ratio is the ratio of the system reactance (X) to the system resistance (R), when viewed from a particular point in the network. X/R ratios are crucial for short circuit studies. Traditional breaker ratings are based on symmetrical three-phase fault currents with an assumed maximum X/R ratio. When the system's X/R ratio exceeds the tested value of the overcurrent device, the circuit breaker will need to have its interrupting rating derated, normally to match the design total asymmetrical fault current (RMS).

Why is this the case? Circuit breakers normally rely on zero crossings during the short circuit waveform to be able to interrupt the circuit and clear the fault. When the X/R ratio is higher, there is more inductance (reactance) in the system. This prolongs the DC offset of the fault current and delays the occurrence of the first zero crossing. The worst-case DC fault current can be written as:

I_DC = √(2) * I_AC * e^(-2π f t / (X/R))

Where:
I_DC is the DC fault current
I_AC is the AC symmetrical fault current
f is the system frequency
X/R is the X/R ratio
t is the time after the fault

Essentially, the DC fault current decays exponentially. When X/R is higher, the factor in the exponential gets lower. This means a bigger X/R equals a longer DC offset.

Figure 1: A Short Circuit has Two Components: The AC (Symmetrical Fault Current) and the DC Offset. Higher X/R ratios lead to longer decay times for the DC offset.

Thinking directly in terms of X/R ratios can be difficult. With enough experience, engineers will become comfortable identifying values that are "high" or "low". However, the meanings behind these X/R ratios will likely remain nebulous. I recommend rethinking X/R in terms of time constants. Short circuits in power systems behave just like an RL circuit-that's why you see the exponential decay in the DC fault current above. We can define a time constant:

τ = L / R

Where L is the system inductance. This time constant describes the time it takes for the DC component to decay from its maximum value to 1/e (approximately 36.8%). After 3 time constants, less than 5% of the DC offset will remain. After 5 time constants, less than 1% of the DC offset will remain. For this reason, the time constant is a very intuitive and natural way of taking inductance and resistance and converting them into something meaningful for the decay of the DC offset.

By recognizing that inductance L is directly proportional to reactance (X = 2πfL), we can write the time constant directly in terms of X/R:

τ = (X/R) / (2πf)

This is a convenient way of viewing things. Now, for typical X/R in the range of 5-60 with a 60 Hertz frequency, we can relate it directly to a time constant:

X/R Ratio | Time Constant (Cycles)
5 | 0.8
10 | 1.6
20 | 3.2
30 | 4.8
40 | 6.4
50 | 8.0
60 | 9.5

The table above demonstrates how the DC offset becomes critical to consider as we move upstream in the power system. An X/R ratio of 5-10 is normal for low voltage power systems, where resistive effects are significant in transformers and cables. In these networks, we can expect that the DC fault current will have a time constant of no more than 2 cycles. That's not bad at all for clearing the fault and maintaining equipment ratings. Large power transformers, on the other hand, can have X/R ratios on the order of 50-60. In these cases, the time constant could be up to 10 cycles. Failing to account for DC offset here could lead to major problems with clearing faults.

Understanding X/R as time constants will give you the power to put real, physical behavior to your networks.
Instead of just knowing that the X/R is "high" or "low", we can say exactly how big this impact will be.
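To make the conversion concrete, here is a minimal Python sketch of the relationship above, assuming a simple series R-L fault circuit on a 60 Hz system. The function names are just for illustration:

```python
import math

def dc_offset_tau(x_over_r: float, freq_hz: float = 60.0) -> float:
    """DC-offset time constant in seconds: tau = L/R = (X/R) / (2*pi*f)."""
    return x_over_r / (2 * math.pi * freq_hz)

def dc_offset_remaining(x_over_r: float, t_seconds: float, freq_hz: float = 60.0) -> float:
    """Fraction of the initial DC offset remaining after t seconds."""
    return math.exp(-t_seconds / dc_offset_tau(x_over_r, freq_hz))

if __name__ == "__main__":
    for xr in (5, 10, 20, 30, 40, 50, 60):
        tau_s = dc_offset_tau(xr)
        tau_cycles = tau_s * 60.0  # convert seconds to 60 Hz cycles
        print(f"X/R = {xr:2d}: tau = {tau_cycles:.1f} cycles, "
              f"{dc_offset_remaining(xr, 3 * tau_s):.1%} of the offset left after 3 tau")
```

Running this reproduces the table values above (e.g. X/R = 5 gives roughly 0.8 cycles) and confirms that about 5% of the offset remains after three time constants.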
- Differential Protection (87)
Introduction - Overcurrent protection (50/51) is the gold standard of device protection. It's used everywhere, mirroring the behavior of molded case breakers and fuses. Sometimes, though, this type of protection can't quite get the job done. This is where differential protection comes into play.

Overcurrent protection curves suffer from a couple of problems. First, 50/51 curves often need to be "coordinated" to avoid tripping off more of the system than is necessary. This coordination can add significant time delays upstream in the distribution network. These time delays can lead to unacceptable fault clearing times, whether because they increase arc flash hazards or infringe on equipment damage curves. Second, time overcurrent protection is inherently unable to tell where the fault is occurring. This means a sufficiently large current can send a signal to all the system relays that they need to trip (or at least start their countdown to tripping). Because of this, 51 protection relies on a time delay to only trip if the condition persists. This could lead to unnecessary damage to the system. Even though the official damage curve may not be exceeded, that doesn't mean that a faulted piece of equipment won't still experience damage (since it's operating outside of its normal conditions). The faster we can trip it off, the better.

Differential protection solves both of these problems by approaching the fundamental question of protection differently: instead of asking, "Is this current too large?", differential protection asks, "Does the current in match the current out?"

Figure 1: Differential Protection in a Nutshell. Faults inside of a Protected Area are Detected by Noticing that the Input and Output Currents are NOT the Same

When everything is going normally, the current into the protected area (e.g. a transmission line, switchgear lineup, transformer) should match the current leaving that same area (after accounting for any turns ratio corrections in the case of a transformer). Even when there's a fault outside of the protected region, everything going in is equal to everything going out. However, when there's a fault in the monitored area, the current going in will NOT be the same as the current going out.

Figure 2: The Zone of Protection for a Bus Protected by Differential

This change in protection philosophy allows the user to trip on a fault condition much faster and with greater selectivity, only taking out the affected area, potentially with an instantaneous trip setting.

Implementing Differential - Practical implementations of differential protection are a bit more complicated than simple overcurrent. Differential requires the use of multiple current transformers (CTs), potentially with substantially different ratings. These CTs define a "zone of protection" where a fault can be detected. Bigger zones trip off larger areas, so they're safer but less selective. In order to assess whether "current in = current out", we need to normalize the values we obtain from each CT. At a high level, this just means accounting for the CT ratio and the transformer turns ratio (if differential is being used for a bus, the turns ratio doesn't apply). For example, a transformer may have a turns ratio of 10, so 100A on the primary corresponds to 1000A on the secondary. We would only trip if there was a difference in the normalized versions of these values. Without normalization, we would see a "difference" of 900A here during regular operation!
87 Protection Pickup - So now we know how to assess the in/out currents and we know that an imbalance in these inputs equates to a trip, but what is the pickup? How much imbalance is acceptable? First, we need to define exactly what we are measuring to pick up. Differential protection is about measuring the difference between inputs and outputs. Mathematically:

I_d = (I_in - I_out) / (I_in + I_out)

Where:
I_d is the difference current, the value the relay operates on when high enough, expressed in %
I_in is the input current to the protected area
I_out is the output current from the protected area

At first glance, we may think that any difference between input and output current should cause a trip. However, this just isn't a realistic design. Unfortunately, current transformers and relays have limitations on accuracy. Standard C-class CTs are only accurate to ±3% at rated current. Relays may have an error of ~1% as well. This means, even if there are no other contributing sources of error, a relay could see an error of:

3% * 2 + 1% * 2 = 8%

Setting below a difference current of 8% could lead to nuisance trips in this case. Where differential protection is used for transformer protection, there are additional sources of error to consider: tap changes, excitation, etc.

Types of Differential Protection Relays - A detailed discussion is beyond the scope of this article, but it's important to understand that differential protection relays generally come in two forms: high impedance and low impedance. High impedance differential is generally used for bus protection while low impedance differential is normally used for transformer protection.

Do I Need Differential Protection? - The question of whether or not differential protection is necessary will likely be driven by a project's contract. The National Electrical Code does not require the use of differential protection. NFPA 70E only requires mitigating measures as required by a risk assessment plan, which leaves a ton of room for engineering judgment. Contracts may require differential protection for buses, like switchgear or motor control centers, to support a lower arc flash hazard level (by tripping faster). Even more typical is the use of differential protection around large transformers. Big transformers are high-cost items that can be excessively damaged by prolonged internal faults. Time overcurrent may NEVER detect internal faults of low magnitude. Only when those faults develop into a serious hazard will the time overcurrent finally pick up. If contractual requirements are not in place, then nothing explicitly requires 87 protection. A system can be designed for compliance with the NEC purely with time overcurrent. However, there may be a loss of selectivity in some cases (tripping off more of the system than intended during a fault) and there may be excessive arc flash hazard levels (that prevent energized work or require very heavy-duty PPE).
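As a rough illustration of the pickup logic described above, here is a short Python sketch. The 10% pickup setting and the normalization values are hypothetical numbers chosen for the example, not recommended settings:

```python
def difference_current_pct(i_in: float, i_out: float) -> float:
    """Difference current per the definition above:
    I_d = (I_in - I_out) / (I_in + I_out), expressed in %."""
    return abs(i_in - i_out) / (i_in + i_out) * 100.0

def check_87_trip(i_in_amps: float, i_out_amps: float,
                  normalization: dict, pickup_pct: float = 10.0):
    """Normalize each measured current (CT ratio, transformer turns ratio),
    then compare the difference current against the pickup threshold.
    pickup_pct is a hypothetical setting chosen above the ~8% error floor."""
    i_in = i_in_amps / normalization["in"]
    i_out = i_out_amps / normalization["out"]
    i_d = difference_current_pct(i_in, i_out)
    return i_d, i_d > pickup_pct

# Example: a transformer with a 10:1 turns ratio carrying load normally
norm = {"in": 1.0, "out": 10.0}   # divide the secondary current by the turns ratio
print(check_87_trip(100.0, 1000.0, norm))  # ~0% difference, no trip
print(check_87_trip(100.0, 600.0, norm))   # an internal fault diverts current, trips
```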
- Cable Termination Temperatures
Introduction - Cable ampacity (how many amps a particular cable type can carry) is one of the most important design calculations. Oversizing cables leads to excessive cost, but undersizing cables can lead to overheating and potentially dangerous failures. The National Electrical Code does an excellent job describing how ampacities of cables (and consequently cable sizes) can be determined in the middle of a circuit. The process, at a high level, is as follows:

Figure out the temperature rating of the cable, its maximum allowable steady-state operating temperature (e.g. 90 Celsius)
Figure out the routing conditions of the circuit (e.g. in isolated conduit in air, directly buried, etc.) and select the appropriate ampacity table
Use the table's column corresponding to the temperature rating, and apply derating factors for temperature, bundling, etc.

The middle of the circuit's ampacity isn't too difficult to figure out, even if there is some engineering judgment to be applied. What is difficult, however, is figuring out the right way to size a cable to account for temperature limits at the point of termination (e.g. connection to a circuit breaker, switchgear bus bar, etc.). The NEC in Article 110 provides prescriptive, although very confusing, guidance. Essentially, standard values from NEC 310.16 are to be used for ampacity at an LV termination point. For MV, the code allows engineers to use the 'applicable' table. Once again, implementing these rules isn't difficult but making sense of them is!

Low Voltage Terminations - First off, we need to understand that the LV requirement is based on UL listing criteria. Equipment is tested with cables of a minimum size that correspond to the sizes in 310.16. Theoretically, we don't need to do any temperature or bundling derating of these cable sizes because equipment shouldn't be used outside of the maximum temperature range used during testing and cables aren't bundled at their point of termination. Of course, nothing is really so simple. Consider the following: what if we had a set of cables with extreme derating just before entering an equipment enclosure, like a giant bundle? Are we sure that the minimum cable size prescribed by the NEC will be suitable in this case? Let's take a closer look.

Figure 1: A Big Bundle of Wires

Consider this example: 15 current-carrying XHHW-2 conductors are bundled together in a single conduit. The circuits each must carry 40A. The conductors terminate at a panel with terminations rated for 75 Celsius. What size of conductor is allowed based on the NEC? Per the NEC, we have to determine a cable size that is suitable for 40A at 75 Celsius for terminations (8 AWG CU per NEC 310.16). Then, we compare this with the conditions of use. Derating for 15 current-carrying conductors is a factor of 0.5 and the 90 Celsius ampacity is allowable. This corresponds to a cable size of 4 AWG CU. Since 4 AWG CU is the larger size, we're required to use that.

Figure 2: Example of Two Different Temperatures Leading to Conflicting Requirements

The example shown in Figure 2 illustrates the confusing part of the NEC: why are we allowed to assume that the temperature of those cables at the point of termination will be so much less? Hypothetically, cables could be operating at 90 Celsius only a few inches before being lugged onto a 75 Celsius piece of equipment.
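The two competing checks in the example above can be sketched in a few lines of Python. The ampacity values are transcribed from NEC 310.16 for copper conductors and should be verified against the current Code edition; the helper function is just for illustration:

```python
# Ampacities transcribed from NEC Table 310.16 for copper (verify against the
# current Code edition before relying on these numbers).
NEC_310_16_CU = {  # AWG size: (75C ampacity, 90C ampacity)
    "8": (50, 55),
    "6": (65, 75),
    "4": (85, 95),
    "3": (100, 110),
    "2": (115, 130),
}

def smallest_size(required_amps: float, column: int) -> str:
    """Smallest conductor whose ampacity in the chosen column covers the load."""
    for size, amps in NEC_310_16_CU.items():
        if amps[column] >= required_amps:
            return size
    raise ValueError("Load exceeds the range of this small table")

load = 40.0            # amps per circuit
bundling_factor = 0.5  # 10-20 current-carrying conductors in the bundle

# Check 1: termination limit, 75C column, no derating applied
termination_size = smallest_size(load, column=0)

# Check 2: conditions of use, 90C column with bundling derating
conditions_size = smallest_size(load / bundling_factor, column=1)

print(termination_size, conditions_size)  # '8' and '4' -> install the larger, 4 AWG
```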
In the Canadian Electrical Code (CEC), this problem is handled in a much more physical way: conductors must be sized to comply with their termination temperature limit based on the worst-case conditions within 5' of the point of termination. This means that the cables in the example above would have been sized for a 75 Celsius temperature even though XHHW-2 was used (because the conductors have no transition period without bundling before terminating). The downside of the CEC method is that it will lead to larger cable sizes. The upside is that the odds of overheating equipment are much lower. Needless to say, using the CEC method in NEC territory can lead to some heated arguments over value engineering. I recommend cable sizing based on CEC methods for terminations, but ultimately this is something the responsible engineer for the project must assess.

Medium Voltage Terminations - NEC guidance on MV terminations is much less particular than low voltage. The requirements of Article 110 essentially say 'use engineering judgment for the right table'. This lax methodology leaves more to interpretation by the responsible engineer, which is of course a double-edged sword. The positive is that this allows an engineer to make an educated assessment about the conditions of the cable as it approaches the point of termination. The downside is just that-there's no real direction. In general, I recommend the same approach used for LV terminations be used for MV (essentially derating for the worst-case conditions within 5' of the termination point).

Conclusion - Cable sizing is not nearly as simple as people often make it out to be. Everything is fun and games until you start to see things overheating and damaging equipment (or worse, catching fire). The NEC often has a reputation as a "conservative" document, meaning it should always provide exceptionally safe designs. However, I have found this is often not the case, and perhaps there is no situation where this is more apparent than terminations. Always make sure you understand what's going on at the ends of the cables before you size and install!
- Time Overcurrent (51) Protection Considerations
Basics - Time overcurrent protection, abbreviated with ANSI device number 51, is THE relaying and protection scheme. What I mean is: if we (as a society) had to choose just one way to protect our equipment, 51 protection would be the answer. It essentially mimics the behavior of fuses (the very first form of protection used in power systems). If I have a small overload, the overcurrent device takes a long time to trip. If I have a short circuit, then the device trips very quickly. This behavior is really nice, since it ensures equipment isn't damaged by overcurrent, while also keeping our system in service as much as possible!

Figure 1: A Schematic Representation of 51 Protection

Medium voltage switchgear, like the example in Figure 2 below, often uses time overcurrent protection to ensure that feeder circuits are protected. 51 is a reliable protection for things like transformers, loads, and buses, so you'll see it show up everywhere.

Figure 2: Medium Voltage Switchgear (Photo Courtesy Eaton)

Pickup - In order to specify the protection of our circuits and equipment, we need to set a "pickup" value for our 51 protection. Essentially, this is the value at which the time overcurrent curve actually begins to take effect. Below the pickup value, the system can operate continuously with no fear of tripping. How do we set the pickup value, though? Well, there are a couple of things to consider. The pickup needs to be set above the load current of the circuit. The pickup needs to be high enough that we avoid nuisance tripping. The pickup value should also be coordinated with the ampacity/current ratings of the equipment being fed. We can meet this design as follows: to avoid nuisance tripping, our 51 pickup needs to be at least as large as our full load current with any possible measurement errors from our CTs and relays accounted for. Generally, a little bit of margin above this limit is advised as well to ensure satisfactory operation. In practice, something like 10% is recommended for the combined margin and error with modern equipment. For low voltage fuses and molded case circuit breakers, 25% margin is the standard set by the NEC.

Curves - The long-time pickup setting is only one part of 51 protection. We also have to define the equation for the curve. In theory, we can define any kind of curve. In reality, the IEC and IEEE define standard curves that are used almost universally for relay settings. In the United States, these curves have designations like U1, U2, U3, or U4 that correspond to the level of "inverse-ness" in the graph (how quickly the relay trips on overload). Figure 3 below shows the standard US trip curves (U1-U5) and their associated trip time vs. pickup current. The pickup current is expressed as multiples of the long-time setting. For reference, U4 has the following trip times:
~16 cycles at 5x pickup
~6 cycles at 10x pickup
~3 cycles at 20x pickup

Figure 3: Standard US Trip Curves

Properly setting time overcurrent protection for coordination amongst devices requires taking a close look at all the curves involved and modifying any settings as needed. Protection is necessary and coordination is desired. This means that selective coordination of breakers (making sure the right breaker, and only the right breaker, trips) is of secondary importance to protecting equipment and personnel. Achieving selective coordination means making sure that there is sufficient time delay between curves at the same current value (assuming the same accuracies for CTs and relays).
For example, consider the following typical delays:
MV Circuit Breakers: 3-5 cycles
Relay Pickup Time: 1-3 cycles
Design Margin: 2-3 cycles
This adds up to a total operating time delay of roughly 6-11 cycles. Without at least this much delay between curves, we can't guarantee that the right breaker alone trips.

Comparison with Instantaneous Overcurrent Protection - Time overcurrent protection (51) is often supplemented with instantaneous overcurrent protection (50). 50 protection relies on setting a value of current that results in immediate circuit breaker operation. Generally, we need to set our 50 protection above any transient currents (like motor inrush, transformer energization, or through-faults that will be cleared by other breakers). Otherwise, there could be nuisance tripping with undesirable results. 50 protection can be combined with 51 to form a single trip curve, an example of which is seen in Figure 4.

Figure 4: A Combined Trip Curve with 51 and 50 Protections

50 protection is incredibly valuable when applied correctly. Adding instantaneous overcurrent to your trip curve may help with coordination. Perhaps most interestingly, with the use of "maintenance switches", plant personnel can temporarily adjust 50 settings downward so an instantaneous trip occurs on very small overcurrents. This can help reduce arc flash hazards for maintenance personnel.

Figure 5: Maintenance Switches on a Switchgear (Photo Courtesy Eaton)

Summary - Overcurrent protection is what keeps our power systems working. Electrical engineers, designers, electricians, maintenance personnel, and operators all need to understand how 50 and 51 protection work to keep a plant running successfully. So, next time you're planning a facility, pay extra special attention to those functions!
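For readers who want to sanity-check the U4 trip times quoted earlier, here is a small Python sketch using the IEEE-style curve equation t = TD * (A / (M^p - 1) + B). The U4 constants shown are commonly published values and a time dial of 1.0 is assumed; always confirm the curve equation and constants against your specific relay's documentation:

```python
def us_curve_trip_time(multiple_of_pickup: float, A: float, B: float, p: float,
                       time_dial: float = 1.0) -> float:
    """Trip time (seconds) for an IEEE/US-style inverse curve:
    t = TD * (A / (M^p - 1) + B), valid only for M > 1."""
    M = multiple_of_pickup
    return time_dial * (A / (M ** p - 1.0) + B)

# Commonly published constants for the US "U4" (extremely inverse) curve.
# Treat these as assumptions and confirm against your relay's documentation.
U4 = {"A": 5.67, "B": 0.0352, "p": 2.0}

for m in (5, 10, 20):
    t = us_curve_trip_time(m, **U4)
    print(f"{m:>2}x pickup: {t:.3f} s ({t * 60:.0f} cycles at 60 Hz)")
```

With these constants the output lands at roughly 16, 6, and 3 cycles, matching the values quoted for U4 above.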
- Substation Design Considerations
Intro - Substations are electrical power systems that convert from one voltage to another and distribute power out. The term substation can refer to medium voltage and low voltage systems (often with the prefix "secondary"), but, in general, the term is reserved for high voltage applications installed outdoors, like what you see in Figure 1.

Figure 1: A Transformer in an Outdoor Substation

Parts of a Substation - Substations can have all kinds of components, just like any other electrical power system. However, we usually break down substations into a few key elements:

Main Transformer: The main transformer is the centerpiece of a substation. Substations can actually have several of these devices working in parallel. The main transformer steps down voltage in a distribution substation to convert high transmission voltages to lower distribution voltages. In a generating facility the main transformer does the reverse-the voltage is stepped up from a medium voltage level to a high transmission voltage to bring power to the end users.

Circuit Breakers: Circuit breakers are the second major piece of any substation. Breakers (along with their associated relays) provide protection against a variety of adverse conditions, including short circuits.

Disconnect Switches: Disconnect switches are essential for maintenance of a substation. They provide the ability to isolate pieces of equipment, including circuit breakers, when work needs to be performed. It's common to see disconnect switches on both sides of all major equipment (e.g. breakers and transformers) in a substation.

Surge Arresters: Surge arresters are used to provide protection against both lightning and switching surge conditions. These devices are often placed around major equipment like transformers and adjacent to any incoming overhead lines. While the substation itself will likely be protected from lightning strikes, the potential for damaging surges originating from incoming lines is very real. Surge arresters don't do anything during normal plant operation, but they play an important role in protecting equipment.

Instrument Transformers: Instrument transformers are used to sense voltage or current in the substation and convey this information to the relaying and protection system. Instrument transformers are also frequently used in substations to provide metering information. Metering-class instrument transformers have a greater level of accuracy than the devices used for relaying and protection, but metering-class devices have a more limited range of measurement.

Bus (Rigid or Flexible): Rigid bus is the primary current-carrying component used in substations. It's essentially a solid piece of metal that travels between components. Rigid bus is normally either rectangular or tubular, can be hollow or solid, and can be made of aluminum or copper. In situations where rigid bus doesn't work (e.g. because of significant seismic concerns), flexible bus can be used instead.

Stranded Conductors: Stranded conductors (or jumper conductors) are normally used in substations to connect bus to equipment. The flexibility of stranded conductors makes them worse at maintaining required clearances than bus. However, this same flexibility means that there is more margin in making terminations.

Figure 2: Annotated Substation Components (Source: OSHA)

Design Considerations - Designing a high voltage substation requires careful consideration of details that are not present in a normal medium or low voltage power system.
First, substations are usually located outdoors with bare conductors and buses. Clearances have to be maintained between conductors and from conductors to ground. These clearances are driven by a number of factors and detailed calculations of the exact required distances can be a bit of a chore (see IEEE 1247 if you're interested). Fortunately, two standards exist that simplify clearance requirements to simple tables: NEMA SG-6 and ANSI C37.32. Both standards are fairly short and outline the values required between metal parts and ground. In summary, clearances become larger with higher BIL (Basic Lightning Impulse Insulation Level) requirements and at higher elevations. In reality, things like clearances are usually mandated by utilities and grid operators. These requirements supersede any other standards. Figure 3 is a satellite image of a substation from Orlando, Florida. Notice how the righthand side of the image has spacings that are much wider than the lefthand side. The transformers in the middle convert power from higher voltage on the right to lower voltage on the left. This reduction in voltage means energized metal parts can be placed closer together.

Figure 3: A Satellite Image of a Substation in Orlando, Florida. Notice the Differences in Spacing between the High and Low Sides of the Transformers.

Next, substations need to be designed with appropriate reliability. Will there be a single main transformer or multiple? Will tie breakers be required to feed additional buses when the normal transformer is out of service? Figure 4 below shows how a substation can be designed with or without added reliability considerations. In both cases, a high voltage input is dropped down to medium voltage. The redundant substation makes use of multiple transformers and circuit breakers to prevent single points of failure and allow for contingency operation when failures do occur. The non-redundant design is likely lower cost to construct, but may not meet plant reliability requirements.

Figure 4: Reliable vs. Unreliable Substation Designs

The third major design consideration for substations is grounding. IEEE 80 and 81 outline the requirements for keeping both touch and step potentials within safe limits. These standards go beyond the National Electrical Code, and with good reason. Substations often have large fault currents from remote sources. When a line-to-ground or double-line-to-ground fault occurs on an incoming line, extremely hazardous voltages could develop without a proper ground grid in place. These ground grids are generally designed with something like 1/0 AWG to 4/0 AWG bare copper conductors placed in a grid. Usually, this grid possesses even spacing to help with constructability. Uneven spacings, like the red grid in Figure 5 below, may help with step and touch potentials but are more difficult to build.

Figure 5: A Constructible Ground Grid (Even Spacing) and a Non-Constructible Ground Grid (Uneven Spacing)

Fourth, we have to think about lightning protection. Substations have many exposed conductors, leading to a high probability of lightning strikes. A proper substation design will include the necessary shielding to minimize the risk of strikes, often through the use of a lightning mast. A lightning mast is just a large grounded structure that provides a preferred path for lightning current, away from valuable equipment. NFPA 780 is an excellent resource and a typical governing code for lightning protection design.
The 150' radius "rolling sphere" method can be used to find the protected area from a lightning mast.

Figure 6: A Lightning Mast and its Zone of Protection Under the Rolling Sphere

Surge arresters are also important. Surge arresters need to be installed around major equipment and adjacent to incoming overhead lines to prevent lightning strikes and switching surges from causing damaging overvoltages.

Lastly, when designing a substation it's imperative to coordinate with other engineering disciplines, field personnel, and operations & maintenance teams. Just because something works electrically doesn't mean that it will work in practice. Foundations for substation equipment may be impractical, civil design considerations may limit the buildable area and require the selection of more compact equipment types, or the type of connections being used (say, welded connections instead of bolted) may be at odds with labor capabilities.

Summary - Designing substations comes with its own unique set of challenges. Outdoor and exposed equipment leads to requirements on clearances and lightning protection. Remote sources at high voltage lead to more detailed grounding requirements. However, at the end of the day, the problem isn't all that different from a low voltage or medium voltage system. Short circuit, load flow, ampacity, and other standard power engineering problems still need to be addressed. Every substation is different-each design needs to be approached individually and with a high level of care.
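To get a rough feel for the rolling sphere geometry mentioned above, the sketch below estimates the horizontal distance protected at grade by a single mast, assuming the standard geometric relationship sqrt(2*S*h - h^2) with a 150' sphere. Real shielding studies need to account for equipment heights, multiple masts, and shield wires, so treat this as illustrative only:

```python
import math

def rolling_sphere_ground_radius(mast_height_ft: float,
                                 sphere_radius_ft: float = 150.0) -> float:
    """Horizontal distance from a single mast within which an object at grade
    falls under the rolling sphere: sqrt(2*S*h - h^2) for h <= S.
    Objects with height above grade reduce this margin and need their own check."""
    h = min(mast_height_ft, sphere_radius_ft)  # heights above S give no extra credit
    return math.sqrt(2 * sphere_radius_ft * h - h ** 2)

for h in (30, 50, 75):
    print(f"{h} ft mast: ~{rolling_sphere_ground_radius(h):.0f} ft protected radius at grade")
```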
- Transformer Voltage Drop
Intro - In a perfect world, transformers convert from one voltage level to another without any impact on the system. Unfortunately, this isn't the way things actually work. In a real transformer, there will be a voltage drop across the transformer. At first, this may seem confusing. For instance, if you step up in voltage from 13.8kV to 345kV, how can there be a voltage drop? The answer is that the drop is in per unit. If the input voltage to your transformer is 13.8kV (100% of nominal), then the output voltage may only be 341.6kV (99% of nominal). Even though you have increased your voltage, you have still lost voltage on a per unit basis.

Transformer Impedance - This voltage drop across a transformer is generally driven by the transformer's impedance. The impedance of any real-world transformer is provided as a percentage referenced to the transformer nameplate power and voltages. The impedance is usually dominated by a reactive (inductive) component. This makes sense since transformers are magnetic devices relying on magnetic flux linkages to do their job. The impedance of the transformer usually falls within a standard range of values that correspond to the transformer's power rating. Very large transformers (e.g. 50+ MVA) generally have an impedance on the order of 10% at their base nameplate rating. Smaller transformers may be somewhere from 3-6%. Transformer impedances aren't all bad-they play an important role in limiting fault currents. At the secondary of a transformer, fault currents are primarily limited by the transformer impedance. If transformers had NO impedance, there could be outrageously large short circuit currents in your system.

Figure 1: A Large Transformer in a Substation-Detailed Nameplate Information is Likely Visible on the Transformer Exterior

The impedance of the transformer can be converted from a nameplate percentage value to an Ohmic value using the following relationship:

Z_Ohms = Z_% * V_Line^2 / S_T

Where:
Z_Ohms is the ohmic impedance value
Z_% is the percent impedance of the transformer (nameplate)
V_Line is the line-to-line voltage of the transformer on one of its windings, usually the secondary
S_T is the 3-phase apparent power rating of the transformer

Notice that V_Line in the equation above can be referenced to either the primary or secondary winding voltage. Results can be converted back and forth, but making use of the secondary winding is a more physically meaningful approach. When the X/R ratio is available, this impedance can be broken down into a resistance R and a reactance X for more accurate results. The new effective impedance, based on information from NEC Chapter 9 Table 9, is:

Z_Effective = R cos(Θ) + X sin(Θ)

Where:
R is the transformer's resistance in Ohms
X is the transformer's reactance in Ohms
Θ is the power factor angle of the load

Note that the value of Θ is somewhat of an assumption. The load power factor is the standard assumed value. However, the power factor at the transformer primary may be worse than the load power factor due to resistive and reactive losses. If input power factor information is available, it is recommended to consider the drop using this information as well.

Calculating Transformer Voltage Drop - We can use the effective impedance of the transformer to calculate a voltage drop across it. The voltage drop on the secondary of the transformer is then:

V_D = √(3) * I * (R cos(Θ) + X sin(Θ))

Where I is the load current magnitude on the transformer secondary and V_D is the voltage drop in terms of volts on the transformer secondary.
If the voltage drop needs to be normalized (put into %), the result should be divided by the transformer secondary voltage. When the X/R ratio is not known for the transformer, the impedance Z_Ohms can be used in place of the effective impedance (R cos(Θ) + X sin(Θ)). This approach may be necessary in some cases to get an understanding of the upper bound for transformer voltage drop. However, the impact of the load power factor on transformer voltage drop is generally nontrivial and can lead to significantly lower results.

Additional Considerations - The results in the above section are based on a transformer operating at its nominal turns ratio. This means that the transformer is being used exactly at the voltage transformation ratio it is stated to perform (e.g. 13.8kV to 480V). In many cases, transformers have taps, which adjust the effective turns ratio of a winding to improve voltage regulation during heavy loading. The effect of a winding tap change is approximately equal to an additional % increase or decrease on the resultant % voltage drop. For example, if a transformer produces a voltage drop of 3% on its secondary, but the secondary winding is tapped up 2.5%, the realized voltage drop would be ~0.5%.

An additional consideration for voltage drop calculations is transformer impedance tolerance. Per IEEE standards, two-winding transformers have a tolerance of 7.5% and three-winding transformers have a tolerance of 10% on their impedances during design. This means that preliminary models should account for this impact when performing voltage drop calculations (e.g. impedance is 7.5% or 10% higher than what the nameplate shows). When tested information is available from the as-built transformer, this tolerance is not necessary.

The model presented in this article is only an approximation. Detailed load flow calculations using iterative numerical methods can provide more accurate results, usually achieved using commercial power systems software like ETAP, EasyPower, or similar.

Example: A transformer serves a load of 2MW at 480V and at a power factor of .85. The transformer has the following nameplate information:
Apparent Power Rating: 3.33 MVA
Impedance: 6% @ 3.33 MVA
X/R Ratio: 20
What is the voltage drop caused by the transformer? Assume no taps are present and that the impedance has no tolerance.

Solution: First, determine the impedance of the system in Ohms:
Z_Ohms = 6% * (480 V)^2 / 3.33 MVA = .00415 Ohms
Next, use the X/R ratio of the system to determine the resistive and reactive components of the transformer impedance. The results are determined using the impedance triangle:
R = .000207 Ohms
X = .00414 Ohms
Third, we need to determine the load current, based on the power factor, voltage, and power required:
I = 2 MW / .85 / 480 V / √(3) = 2830 A
Lastly, determine the voltage drop using the load power factor:
V_D = √(3) * 2830 A * (.000207*.85 + .00414*sin(acos(.85))) = 11.6 V
Or, normalizing into percent, we get the following:
V_D% = 11.6 V / 480 V = 2.42%
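The worked example above can be reproduced with a short Python function. The function name and argument structure are just for illustration; the math follows the article's approximation (impedance-triangle split of Z, effective impedance R*cos(Θ) + X*sin(Θ)):

```python
import math

def transformer_voltage_drop(kw_load, pf, v_secondary, s_rating_va, z_pct, x_over_r):
    """Approximate secondary voltage drop from nameplate Z% and X/R."""
    z_ohms = (z_pct / 100.0) * v_secondary ** 2 / s_rating_va
    r = z_ohms / math.sqrt(1.0 + x_over_r ** 2)   # impedance triangle
    x = r * x_over_r
    i = kw_load * 1000.0 / (pf * v_secondary * math.sqrt(3))   # load current
    theta = math.acos(pf)
    v_drop = math.sqrt(3) * i * (r * math.cos(theta) + x * math.sin(theta))
    return v_drop, v_drop / v_secondary * 100.0

vd, vd_pct = transformer_voltage_drop(
    kw_load=2000, pf=0.85, v_secondary=480,
    s_rating_va=3.33e6, z_pct=6.0, x_over_r=20.0)
print(f"{vd:.1f} V ({vd_pct:.2f}%)")   # about 11.6 V, 2.4%, matching the example
```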
- Generators (NEC 445)
Basics - Generators have been the backbone of the electrical grid for as long as the grid has been around. Other than solar energy, all dominant forms of power production rely on the use of generators powered by turbines (hydro, nuclear, coal, gas, oil, wind). The turbine converts some form of energy into mechanical energy and the generator converts that mechanical energy into electrical energy. The concept remains the same regardless of the original power source: a rotor (the part that moves) is rotated against a stator (the part that doesn't move). The interaction between the magnetic fields of the stator and rotor induces a voltage by Faraday's Law. The stator and rotor can consist of permanent magnets or windings. If coil windings are used for both the rotor and stator, then a separate source is required to excite them (at least initially). Notice that the definition of the rotor and stator doesn't actually tell you which part of the generator the windings are attached to. You might think that the stator would always be the output source, but brushes and other mechanisms can be used to solve the issue of moving wires. The armature is the winding that has the load connected to it (and gets a voltage induced into it). The field is the winding that produces a magnetic field.

From a practical perspective, generators are required to be rated for a variety of key parameters, including:
Impedances
Output voltage
Output current
Output power (real or apparent)
Power factor capabilities
Frequency
Number of Phases
Operating temperature limits and characteristics
All of this information is used to design power systems fed from generators.

Generator Modeling - How do generators really run, though? Does a generator behave like a voltage source? A current source? Or something else?

Figure 1: What does a generator do?

The answer is that generators behave as we control them! Generators are usually controlled in one of a few main ways. In each case, the set point for real power is considered to be constant (i.e. we are always trying to produce a certain amount of real power as required by the load). Two of the most common ways are explained here:

Voltage Control: In this mode, generators are designed to keep their voltage magnitude constant while allowing their reactive power to vary. The generator will absorb VARs when the grid voltage is higher than the generator terminal voltage. The generator will deliver VARs when the grid voltage is lower than the generator terminal voltage. In a typical synchronous generator, we can control the voltage by altering the field strength through a system known as "excitation". A stronger magnetic field will create a larger induced voltage in the stator.

Q Control: In this mode, generators are designed to keep their reactive power constant. Instead of allowing Q to vary, the generator lets the voltage vary. In effect, this is the opposite of voltage control. We still hold the real power constant, but we are trading off voltage for VARs.

Figure 2: Generator Control Schemes in a Nutshell

By this point it may already be clear, but we can't simultaneously control voltage, reactive power, and real power. If we try to fix all of those points, it's like having a system of equations with 3 equations and 2 unknowns. We've over-constrained the problem. In real-world language, it means that we can only set P, Q, and V if we are designing for one super-specific case. Any deviation in the grid at all, any change in loads, etc., and the whole thing falls apart.
If you tried to design the generator to do all of these things, then the control system simply wouldn't work. Which mode gets used and when? Ultimately, generators that are connected to the grid get their mode of operation determined by the grid operator. The grid's impedance and the local network needs (as seen by the local utility or whoever the grid operator is) will be the deciding factors. However the generator is operated, it needs to be run safely. For example, if a generator is run in Q control mode and the P and Q are so large that they lead to a voltage below the minimum operating voltage, then the setpoints have to be changed. Otherwise, you risk damaging the generator or worse.

Generator Output Conductors - For small generators, like those governed by the NEC, the requirement for output conductor sizing is that they be capable of continuously carrying 115% of the generator output current unless the generator has appropriate overload protection (in which case only 100% loading is required). Easy enough, and for many residential applications, the conductors will still end up being sized to 125% for consistency with other branch circuits and feeders. Big generators are a bit more complicated. Often, generators don't have typical insulated cable coming out of them. Generators may produce so much current that using insulated wire and cable isn't cost effective. Instead, systems like isolated-phase bus duct (IPBD) and non-segregated phase bus duct (NSBD) are used. These are bus bars that, as their names suggest, are either isolated or not. They're a specialty kind of system made by specialty manufacturers and governed by the IEEE standard for metal-enclosed bus (C37.23). Bus ducts are ordered with specific ampere ratings based on ambient temperatures. Testing is required of these assemblies and engineers don't size them like typical wire and cable. In any case, the output conductors from a generator need to be able to safely carry the expected output current at the expected worst-case temperature, irradiance, etc.

Example - Solve for the current magnitude I in the single-phase circuit below. Also, solve for the reactive power Q produced by the generator. The generator is operating in voltage-control mode, meaning that the output voltage of the generator will be held constant with a magnitude V. The output power is also held constant with a value P.

First, solve for the current using Ohm's Law:
I = V / (R + jX)
The magnitude of this expression is:
I = V / √(R^2 + X^2)
With the current magnitude known, we can solve for the reactive power Q. By conservation of complex power, the reactive power drawn by the load must be equal to that produced by the generator. The load draws a reactive power given by:
Q = X I^2
which is necessarily the same as the generator's.
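Here is a minimal Python version of the example's formulas, with hypothetical values of V, R, and X chosen just to exercise the math:

```python
import math

def generator_example(v_mag: float, r_ohms: float, x_ohms: float):
    """Current magnitude and reactive power for the single-phase example above:
    |I| = V / sqrt(R^2 + X^2), Q = X * I^2 (VARs supplied by the generator)."""
    i_mag = v_mag / math.sqrt(r_ohms ** 2 + x_ohms ** 2)
    q_vars = x_ohms * i_mag ** 2
    return i_mag, q_vars

# Hypothetical numbers, just to exercise the formulas
i, q = generator_example(v_mag=480.0, r_ohms=4.0, x_ohms=3.0)
print(f"I = {i:.1f} A, Q = {q / 1000:.1f} kVAR")   # 96.0 A, 27.6 kVAR
```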
- Breaker Failure Protection
Introduction - Suppose you've got a power distribution system with some loads fed from a source. But then, OH NO!! A fault takes place on your downstream system near one of your loads! In a well-designed system, the breaker nearest the fault should open up and stop the fault with limited interruption to the rest of the system. What about when this doesn't happen?

Breaker failure is the situation that takes place when a circuit breaker, usually at medium voltage, fails to open when sent a command from a relay. This means that although a trip command was sent to the breaker, it didn't open up in the time frame it should have. Do you need to plan for breaker failure in your designs? That's a tough question. In a well-coordinated system like the one shown in Figure 1, the upstream breaker should open up after some time even if the downstream breaker fails. If the system is designed for all components to withstand that downstream fault magnitude for the duration of time it takes the upstream breaker to trip, you are probably okay. However, this may not be the case. You may have conductors and/or equipment that are designed only to withstand faults with a clearing time from their local upstream overcurrent protection device (like the breaker feeding LOAD2 in Figure 1). In that case, you'll need to make use of breaker failure protection.

Figure 1: Breaker Failure Example

How does it work? - Breaker failure protection can be implemented in a number of ways, but the simplest is a 2-step check: Has a trip command been issued by the controlling relay/trip unit? After some time delay from that trip (for the breaker to pick up the command and operate), is the breaker status still closed? If the answer to both questions is yes, then the controlling relay/trip unit detects that the breaker has failed. This detection usually corresponds to an auxiliary contact on the relay changing state from open to closed. That contact can be hardwired into other relays or controllers, or sent over a network communication like Modbus TCP/IP back to a control system. Hardwired signals are often used to trip upstream breakers quickly when breaker failure is detected. This lets time-current curves remain well-coordinated with delay gaps, but ensures a single point of failure doesn't take out the entire system. A network communication isn't as useful for protecting a system because of processing delays, but it does let an operator know what took place and why the upstream breaker was opened instead of the downstream one. Otherwise, investigations into the fault may not start in the right place; operators could be confused by the upstream breaker tripping and assume a different location for the fault than reality.
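The 2-step check described above can be sketched as a small state machine. This is a conceptual illustration only; real relays implement breaker failure with current supervision and carefully chosen timers, and the 8-cycle delay here is a hypothetical placeholder:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class BreakerFailureCheck:
    """Minimal sketch of the 2-step breaker failure check described above."""
    delay_cycles: float = 8.0               # hypothetical margin for the breaker to operate
    _trip_issued_at: Optional[float] = None

    def update(self, now_cycles: float, trip_commanded: bool, breaker_closed: bool) -> bool:
        """Return True once a trip was commanded, the delay expired, and the
        breaker status is still closed."""
        if trip_commanded and self._trip_issued_at is None:
            self._trip_issued_at = now_cycles   # start the failure timer
        if not breaker_closed:
            self._trip_issued_at = None          # breaker opened: reset, no failure
            return False
        if self._trip_issued_at is None:
            return False
        return now_cycles - self._trip_issued_at >= self.delay_cycles

bf = BreakerFailureCheck()
print(bf.update(0.0, trip_commanded=True, breaker_closed=True))    # False: timer started
print(bf.update(10.0, trip_commanded=True, breaker_closed=True))   # True: declare breaker failure
```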
- Three-Phase Power
The dominant form of power generation, transmission, and distribution in the United States is three-phase. Three-phase power uses three separate, energized conductors, 120° apart in phase, to transmit power more efficiently than single-phase (line and neutral) power. The cost-effectiveness of three-phase power makes it worthwhile, but the added complexity over single-phase power adds some challenges.

Why Three-Phase Power? - The benefits of three-phase power can be summarized succinctly: three-phase power is better than single phase because it lets us transmit three times the power of single phase with one more wire. In other words, we get 200% added benefit for 50% added cost. That's a pretty great deal! How does this work? Well, it comes back to the phase difference between voltages. Thanks to the fact that the voltages are all shifted by 120°, they sum to zero. This means that the currents from all three phases will also sum to zero by Ohm's Law (for a balanced load). In other words, there doesn't need to be a dedicated "return" conductor, like on a single-phase system. The phasor diagram below explains this in more detail.

Phasor Diagram of Three Voltages or Currents in a 3-Phase System

Another benefit of three-phase power is the steadiness of power supplied. While single-phase power is volatile, alternating rapidly between its peak value and 0, three-phase power is much more uniform. Thanks to the fact that three-phase sources have different phases, the power received by the load looks much closer to constant. This is a big benefit for things like motors. This might make you think, "If three-phase is better than single phase, can I just keep increasing the phases to make a better power system?" Unfortunately, not really. Higher phase count systems will be able to transmit more power, but they rely on the same physics as the three-phase system. This means we end up needing 9 wires to transport 9x the power of single phase. Our per-unit cost of transporting power is still the same as three-phase but we've now increased the complexity. That's no good.

Wye and Delta Connections - When we talk about single-phase power, it's easy to understand how things wire together: every source and load has a positive and a negative. Make sure that everything connects with the right polarity and you're good to go. With three-phase power, connections look different. A three-phase component (whether a source or a load) can be visualized as three single-phase elements wired together. There are two options for this connection method: wye and delta. The images below show both wiring options. The wye connection, also known as the star connection, has a center point common to all elements. This center point is known as the neutral point and provides a common reference for the A, B, and C phases. The delta connection is configured differently. Notice that there is no common neutral point for all three phases on the delta system.

Wye Connected Elements with Neutral Point N
Delta Connected Elements

Some terms that are used when referring to three-phase (wye and delta) connections are as follows: The line-to-line voltage VL is the voltage between two phases (e.g. A to B). The line-to-line voltages on a balanced three-phase system will all have the same magnitude but be shifted in phase by 120°. The line-to-neutral voltage VN is the voltage between one phase and the neutral point of the source or load (if applicable).
Like the line-to-line voltage, the line-to-neutral voltages in the system will all have the same magnitude but be shifted in phase by 120°. The magnitude of the line-to-neutral voltage | VN | is related to the magnitude of the line-to-line voltage | VL | by:

| VL | = √(3) | VN |

The line current IL is the current flowing on the conductors connecting sources to loads. Line currents will all have the same magnitude but be shifted in phase by 120°. The phase current IP is the current that flows through the internal elements of a source or load. Phase currents will all have the same magnitude but be shifted in phase by 120°. On a wye-connected system, the line current is equal to the phase current by Kirchhoff's Current Law. On a delta-connected system, the line current magnitude is related to the phase current magnitude by:

| IL | = √(3) | IP |

Summary of Variables for Three-Phase System

Complex Power with Three-Phase - The apparent power flowing in a three-phase system can be computed as follows, regardless of wye or delta connections:

S = √(3) VL IL

The real power and reactive power absorbed by a load can then be determined using the power factor angle θ of that load:

P = S cos(θ) and Q = S sin(θ)

All of these equations assume a typical, balanced three-phase system. By balanced, we mean that the loads have an equal impedance on all three branches and the sources have an equal supply voltage on all three phases.

Per-Phase Analysis - With a balanced system, each phase behaves with the same magnitudes of voltages, currents, and impedances. The only difference between each phase is a shift of 120°. Instead of worrying about evaluating circuits on a three-phase basis, we can simplify things back down to a "per-phase" analysis by recognizing this symmetry. Once we solve for all variables on one phase, we know that it is descriptive of the entire system. In order to complete a per-phase analysis, we must ensure that all loads and sources are in their wye-connection equivalents. This means that all voltages must be considered as line-to-neutral and all loads that are in a delta form must be converted to a wye form. In theory, this is a laborious task, but in practice we don't usually need to get into this level of detail. Most loads are specified in terms of their rated load currents instead of their impedances.

Example: A wye-connected load is rated for 480V line-to-line and has a load current of 10A. What is the line-to-neutral voltage across the load? What is the apparent power drawn by the load?

Solution: The line-to-neutral voltage is found by taking the line-to-line voltage and dividing by √(3):
| VN | = 480 / √(3) = 277 V
The apparent power of the three-phase load can also be directly calculated:
S = √(3) VL IL = √(3) * 480 * 10 = 8.31 kVA
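The example above maps directly onto a couple of one-line helper functions; here is a minimal Python sketch:

```python
import math

def wye_line_to_neutral(v_line_to_line: float) -> float:
    """|VN| = |VL| / sqrt(3) on a balanced wye system."""
    return v_line_to_line / math.sqrt(3)

def three_phase_apparent_power(v_line_to_line: float, i_line: float) -> float:
    """S = sqrt(3) * |VL| * |IL| for a balanced system, wye or delta."""
    return math.sqrt(3) * v_line_to_line * i_line

v_ln = wye_line_to_neutral(480.0)
s = three_phase_apparent_power(480.0, 10.0)
print(f"V_LN = {v_ln:.0f} V, S = {s / 1000:.2f} kVA")   # 277 V, 8.31 kVA
```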
- How to Become an Electrical Engineer
Growing up, I didn't know what I wanted to be. I liked math and science, and, as I got older, I realized that I particularly liked physics. I went to study at the University of Kansas and wasn't sure what to declare my major in, but decided to go with electrical engineering. I thought it would be a field that would let me use my knowledge and passion for physics to make the world a better place, and I was right! If you're looking for some advice on how to become an electrical engineer, you've come to the right place.

First things first, though: we have to understand that electrical engineering is a broad field. Breaker & Fuse is all about electrical power systems, the large-scale world of electricity that brings power from point A to point B safely, but there's more to electrical engineering than just that. Electrical engineering also covers areas like communications (radio, cellular, etc.), electronic hardware, and computing. Electrical engineers work in just about every industry and their areas of expertise overlap with other engineering disciplines.

Education - To become an electrical engineer of any sort, the minimum requirement is an ABET-accredited undergraduate degree in electrical engineering. ABET-accredited universities are all over the place, and I don't just recommend one because you'll likely get a better education; in order to become a licensed Professional Engineer, you'll have to graduate from one. Undergraduate programs in electrical engineering are broad and, thanks to ABET accreditation requirements, they generally have very similar requirements. Here's an example of some typical electrical engineering courses you would be required to take:
DC Circuits
AC Circuits
Electronic Circuits
Digital Electronics and Logic
Programming
Communication Systems
Control Systems
Signal Processing
Electromagnetics
Along with these major-specific courses, there are other courses that are almost universally required to become an engineering graduate:
Calculus (Single Variable and Multi-Variable)
Linear Algebra
Differential Equations
Statistics
Economics
Physics (Classical Mechanics)
The remainder of the classes that you take as an undergraduate are highly dependent on the university you attend and the electives that you choose to take on.

Graduate education in electrical engineering is also common for specialization. The available graduate courses at a college will vary considerably based on the staff who can teach them. At the University of Kansas, I was able to specialize in power systems through graduate coursework. As an example, here are some of the classes I took:
Power Systems Engineering I & II (All about the National Electrical Code and Real-World Design)
Power Systems Analysis (short circuit, load flow, and other power calculations)
Electric Energy Production and Storage (Power generation, renewables, and specialty topics)
Power Electronics (Inverters, Converters, and more)
Motors and Motor Control
Graduate education is not required to become a licensed electrical engineer, but it may be used as qualifying experience towards licensure if it results in a degree, and it's usually viewed favorably by employers.

Licensure - Once you've gone through school and earned the degree, a job as an electrical engineer is the next step. This is where things start to get a bit confusing, though. While you may be employed as an electrical engineer, there are limits to what you can do.
Until you become a Professional Engineer (PE), a licensed and registered practitioner of engineering, you cannot actually stamp and approve your own drawings to be used for construction in a public setting. There are four things that you have to do, in order, to get that PE license:
1. You must graduate from an ABET-accredited university with a degree in engineering.
2. You must pass the Fundamentals of Engineering Exam (FE), a test of all the things you must learn in an ABET-accredited program.
3. You must pass the Professional Engineering Examination in the electrical discipline of your choice (Electrical Power, in the case of those who would usually be reading this article). This test is much more difficult than the FE and will require a detailed understanding of the industry and NEC.
4. You must have at least four years of qualifying work experience, with recommendations by applicable Professional Engineers, and receive approval from the state licensing board.
There are other options here that can lead to licensure as well, but they are more obscure and should be reviewed on a state-by-state basis. In no state can one become a licensed PE without at least four years of work experience following graduation. Once all of that is done, you will be a fully licensed Professional Electrical Engineer. It's a lot of work, no doubt, but Professional Electrical Engineers have a big responsibility, making sure that the power infrastructure all around us is safe and effective!
- Panelboards, Switchgear, and Switchboards (NEC 408)
Panelboards, like the one pictured below, can be found in virtually all buildings with electrical service. Switchgear and switchboards are much less familiar to the lay person, but they all serve the same purpose: taking an incoming power feed and splitting it up into smaller circuits protected by overcurrent devices. The panelboard below uses molded case circuit breakers for protection of branch circuits. Panelboards, switchgear, and switchboards are all defined in NEC Article 100. The distinctions may not be immediately obvious. Small panels (in terms of both size and current ratings) that are accessible only from one side are usually called panelboards, and large, standalone, metal-enclosed assemblies are called switchgear. Switchgear generally range in ratings from 800A to 6000A. Switchboards are somewhere between the two. In the 2020 edition of the National Electrical Code, Article 408 addresses panelboards, switchgear, and switchboards.

A Typical Electrical Panelboard for Residential Applications

High-Level Schematic of a Typical Panelboard with Circuit Breakers

Panelboards are required to be protected with an overcurrent device rated for no more than their nameplate rating. For example, a 300A or 400A fuse could be used to protect a 400A panelboard, but not a 500A fuse. Switchboards and switchgear are not subject to this requirement, but it's still a good design practice.

Padmount Switchgear Installed in a Public Location

Neither Article 408 nor Article 215 (Feeders) requires that the feeders supplying the panelboard, switchgear, or switchboard have ampacity sufficient to carry the nameplate rating of the device. Instead, feeders are only required to carry the load anticipated to be fed through the panelboard (125% of continuous load + 100% of noncontinuous load). It may be a good choice to design for the nameplate of the panel if additional growth is expected for the installation. Additionally, remember that conductors must be protected in accordance with their ampacity. If a conductor is sized to carry a load less than the panel/switchgear rating, then overcurrent protection must be provided to prevent the conductor from being used at a higher rating.

The National Electrical Code used to place a limit on the number of overcurrent protective devices that could be installed in panelboards, but that language has been modified in more recent editions of the Code. Now, the limit is set by the manufacturer. Depending on the load served from the system, it may be reasonable to limit the sum of the downstream overcurrent protection trip ratings to be no more than the panel rating. However, this is often an unnecessarily conservative practice due to the limited number of standard overcurrent protective device ratings. It's common for very small loads in a home to all be supplied by 15A and 20A breakers, totaling a collective downstream rating much larger than the panel's main rating. If the system is properly designed for coordination and overload protection, then there is no issue with having a higher total downstream trip rating than the main trip rating for the panel/switchboard/switchgear.
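As a quick illustration of the feeder sizing rule quoted above (125% of continuous plus 100% of noncontinuous load), here is a tiny Python sketch with hypothetical load numbers:

```python
def minimum_feeder_amps(continuous_load: float, noncontinuous_load: float) -> float:
    """Minimum feeder ampacity per the rule cited above:
    125% of continuous load plus 100% of noncontinuous load."""
    return 1.25 * continuous_load + 1.0 * noncontinuous_load

# Hypothetical panel load: 80 A continuous + 60 A noncontinuous
print(minimum_feeder_amps(80.0, 60.0))   # 160.0 A, even if the panel itself is rated higher
```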
- Conductor Ampacity (NEC 310 and 311)
Ampacity refers to the amount of current (in amperes) that can be carried by a particular conductor at a particular operating temperature. Ampacity is determined by the size of the conductor (how much metal is available to carry current) and the installation conditions (how easily the heat generated by the current can dissipate to the surroundings). Determining ampacity is one of the most important calculations in electrical design. If a conductor with too low an ampacity is selected, the installation will be dangerous: the conductor, or the equipment it is connected to, could overheat and start a fire. On the other hand, upsizing conductors unnecessarily leads to additional costs and may make projects too expensive. The 2020 National Electrical Code addresses ampacity calculations in Articles 310 and 311. The NEC method for determining ampacity is as follows:
1. Determine the right table(s) to use, reflecting your installation conditions. NEC 310.16 is the most common table for low voltage installations.
2. Determine the temperature rating of your conductor, typically 60°C, 75°C, 90°C, or 105°C. The conductor temperature rating is tied to the insulation naming convention described in NEC 310 and 311, and the maximum operating temperature often depends on whether the location of use is wet or dry.
3. Determine whether any additional considerations warrant using a lower temperature for any segment of your ampacity calculation. Terminations are often rated lower than conductors: 60°C terminations are the minimum for low voltage equipment rated 100A or less, and 75°C terminations are the minimum for low voltage equipment over 100A. Medium voltage terminations are almost always rated for 90°C. NEC Article 110 has more information on these requirements. Is the raceway suitable for the temperature you are operating at? Raceways may not be rated for 105°C. Is additional margin required by your client or project needs? Are there any other special circumstances to be aware of?
4. Derate the table ampacities based on the conditions of use throughout the run. Derating factors are numbers that multiply the table ampacities to adjust them for specific conditions.
The 4 steps above seem pretty straightforward, but derating can make things complicated. The primary derating factors described by the NEC are bundling, ambient temperature, and burial depth. Here's a deeper dive on each one:

Bundling - When more than 3 current-carrying conductors are routed together without spacing for more than 24", the NEC requires their ampacity to be derated based on exactly how many current-carrying conductors are in one area. The values of the bundling derating factor, x, are as follows:
- 4-6 conductors: x = 0.8
- 7-9 conductors: x = 0.7
- 10-20 conductors: x = 0.5
- 21-30 conductors: x = 0.45
- 31-40 conductors: x = 0.4
- 41 or more conductors: x = 0.35
Since the ampacity tables in the NEC assume no more than 3 current-carrying conductors in each installation, it makes sense that additional derating is required when more wires are bundled together. Additional conductors mean additional heat, and when bundled together that heat can't easily be dissipated, leading to increased operating temperatures.
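A simple lookup makes these adjustment factors easy to apply in calculations. The sketch below just encodes the factors listed above; the function name is my own.

```python
def bundling_derating(num_current_carrying: int) -> float:
    """Bundling adjustment factor x for more than 3 current-carrying conductors
    routed together without spacing for more than 24 inches."""
    if num_current_carrying <= 3:
        return 1.0          # table ampacities already assume up to 3 conductors
    if num_current_carrying <= 6:
        return 0.8
    if num_current_carrying <= 9:
        return 0.7
    if num_current_carrying <= 20:
        return 0.5
    if num_current_carrying <= 30:
        return 0.45
    if num_current_carrying <= 40:
        return 0.4
    return 0.35

print(bundling_derating(20))   # 0.5, the factor used in the worked example below
```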
Ambient Temperature - When the air or soil surrounding a cable is warmer than the conditions assumed in a table, the conductor won't be able to carry as much current before reaching its temperature rating. Conversely, when the surroundings are cooler, the conductor can actually carry more current before the temperature limit is reached. The mathematical formula for the derating is:

y = √(T_C − T_A) / √(T_C − T_R)

Where:
- y is the ambient temperature derating factor
- T_C is the conductor operating temperature limit in °C, as adjusted for any special conditions (including terminations) described above
- T_A is the actual ambient air or soil temperature in °C
- T_R is the reference ambient temperature in °C used in the relevant NEC ampacity table
An ambient temperature derating factor greater than 1 is permissible. Derating for ambient temperature may be based on the conductor temperature rather than the termination temperature when the conductor temperature is higher.

Burial Depth - When a conductor is buried deeper than the conditions assumed for the NEC ampacity tables, the ampacity has to be decreased; the NEC notes that this requires a derating of 6% per foot of additional burial depth. Unlike the ambient factor, a burial depth derating factor greater than 1 is not allowed. Mathematically:

z = 0.94^((D_B − D_R) / 12)

Where:
- z is the burial depth derating factor
- D_B is the burial depth in inches
- D_R is the reference burial depth in inches from the relevant NEC ampacity table

Putting It All Together - The derated ampacity A' of a conductor can be calculated as:

A' = x · y · z · A

Where A is the appropriate NEC table's reference ampacity, before correction factors, at the conductor operating temperature limit (as adjusted down for terminations). A should be carefully selected for the correct design conditions as described in the 4-step process above; in particular, A must be chosen so that overheating does not occur at the point of termination. For multiple conditions of use throughout the circuit, the equation can be extended to several values of A with their related derating factors. Low voltage equipment must use NEC 310.16 for A, while medium voltage equipment may use the conditions-of-use tables throughout the circuit.
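These formulas translate directly into code. The sketch below is a minimal implementation of the ambient temperature, burial depth, and combined derating expressions exactly as written above; the function names are my own, and the cap of 1.0 on the burial factor reflects the NEC note that this factor may not exceed 1.

```python
import math

def ambient_derating(t_conductor_c: float, t_ambient_c: float, t_reference_c: float) -> float:
    """Ambient temperature correction factor: y = sqrt(Tc - Ta) / sqrt(Tc - Tr).
    May exceed 1 when the actual ambient is cooler than the table reference."""
    return math.sqrt(t_conductor_c - t_ambient_c) / math.sqrt(t_conductor_c - t_reference_c)

def burial_derating(depth_in: float, reference_depth_in: float) -> float:
    """Burial depth derating factor: z = 0.94 ** ((Db - Dr) / 12), capped at 1."""
    return min(1.0, 0.94 ** ((depth_in - reference_depth_in) / 12))

def derated_ampacity(table_ampacity_a: float, x: float, y: float, z: float) -> float:
    """Combined derated ampacity: A' = x * y * z * A."""
    return x * y * z * table_ampacity_a
```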
Example: What is the ampacity of (20) 8 AWG CU XHHW-2 current-carrying conductors run together through conduit in an outdoor area that sees ambient temperatures as high as 40°C? The equipment the wires terminate on has standard temperature ratings.

Solution: To solve this problem, we'll follow the 4-step process outlined above and draw a picture to help us understand the situation.

Step 1 is to determine the right NEC table to use. Low voltage conductors like XHHW-2 are covered by Article 310, and since the conductors are run in conduit, NEC Table 310.16 applies.

Step 2 is to determine the temperature rating of our conductor. XHHW-2 is rated for 90°C in wet or dry locations per NEC 310. The ampacity of 8 AWG XHHW-2 at 90°C is 55A.

Step 3 is to determine whether any special conditions warrant limiting our ampacity to a lower temperature. Standard low voltage equipment rated 100A or less has 60°C termination temperature limitations. The 60°C ampacity of 8 AWG copper is 40A, so we limit ourselves to that value instead.

Step 4 is to derate the reference ampacity for the conditions of use. No burial depth derating factor applies since the conductors are run through conduit in air, not underground. The ambient temperature derating factor must be computed because the ambient air temperature can reach 40°C while NEC 310.16 assumes an ambient temperature of 30°C. The value of y is computed against the 60°C operating limit imposed by the terminations:

y = √(60°C − 40°C) / √(60°C − 30°C) = 0.816

The bundling derating factor is also relevant since 20 current-carrying conductors are routed together in a single conduit. For 10-20 current-carrying conductors, x = 0.5. The ampacity A' can then be computed:

A' = A · x · y = 40A × 0.5 × 0.816 ≈ 16.3A
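To double-check the arithmetic, here is a short sketch reproducing the worked example with the factors derived above. The variable names are my own; the 40A reference value is the 60°C ampacity of 8 AWG copper from NEC 310.16.

```python
import math

# Reference ampacity A: 8 AWG Cu at the 60°C column of NEC 310.16,
# limited by the standard terminations on equipment rated 100A or less.
A = 40.0

# Bundling adjustment for 10-20 current-carrying conductors in one conduit.
x = 0.5

# Ambient temperature correction for 40°C air against the 30°C table reference,
# evaluated at the 60°C termination limit.
y = math.sqrt(60 - 40) / math.sqrt(60 - 30)   # ≈ 0.816

# Conductors are in conduit in air, so no burial depth derating applies.
z = 1.0

A_prime = A * x * y * z
print(f"Derated ampacity: {A_prime:.1f} A")   # ≈ 16.3 A
```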