

Quantifying Loading Efficiency Losses on Lithography Clusters

Jason Foster, Milind Mohile, John Matthews
Qimonda NA Corp
6000 Technology Blvd, Sandston VA 23150 USA
[email protected], [email protected], [email protected]

ABSTRACT

The high cost of Lithography clusters requires fabs to continuously work to optimize their utilization and output. Unfortunately, due to the highly complex nature and the parallel processing capability of the cluster, accurately determining the utilization and output detractors can be very difficult. Without the ability to accurately identify and quantify the equipment capacity loss, it is very difficult to improve equipment performance, as well as to synchronize actual equipment performance with capacity planning. At Qimonda, we have measured several speed losses on our Lithography clusters, with interrupts to takt time consistently being the largest (see Fig. 1). Interrupt Losses can be divided into two general categories of causes: equipment alarms/aborts and loading efficiency.

[Figure 1: "300mm Speed Curve Summary for Lithography Tool Set." X-axis: Cascade Level (lots), 1-20; Y-axis: % of Steady State Speed, 0-100%. Curves shown: SS Golden Curve, Equip Loss Curve, Interrupt Loss Curve, Recipe Change Loss, Actual Curve (Avg). Annotation: interrupt loss is the primary speed loss.]

Fig. 1. Speed loss example illustrating the impact of Interrupt losses.

This paper proposes a method for separating tool takt interrupts into losses caused by loading efficiency and losses caused by equipment alarms/aborts. Actual results are summarized from applying the method to both scanner clusters and stepper clusters. In general, the method can be applied to any tool with parallel processing capability.

INTRODUCTION AND SPEED LOSS OVERVIEW

To ensure a consistent understanding by the reader, we will first give a brief overview of our definitions of speed losses. The speed analysis begins by measuring and establishing the OEE baseline speed. Using lot-end to lot-end events from tool log files, we create lot takt times per recipe per tool. When consecutive lots run the same recipe and the next lot begins before the previous lot ends, the corresponding takt times for the cascaded lots are put into a steady-state takt-time population. Once down events are removed, we select the 5th percentile takt time from the steady-state takt-time population, which corresponds to the 95th percentile WPH. The 95th percentile performance of the best tool (a.k.a. the Golden Tool) becomes the OEE baseline speed (also referred to as the steady-state maximum speed) for the given tool type and recipe. All speed losses are measured relative to this Golden Tool baseline. The gap between the 95th percentile speed of the Golden Tool and the actual average speed of all tools in the toolset (per recipe) is the total speed loss for the tool set.
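As a rough illustration of this baseline extraction, the sketch below builds the steady-state takt-time population from simplified lot events and takes its 5th percentile; the field names and log layout are illustrative assumptions, not the actual tool log schema.

```python
import statistics

# Minimal sketch of the steady-state baseline extraction described above.
# Each lot event is assumed to carry a recipe name and start/end datetimes;
# these field names are hypothetical, not Qimonda's log format.

def steady_state_takts(lots):
    """Collect lot-end to lot-end takt times (hours) for cascaded lots:
    consecutive lots running the same recipe where the next lot starts
    before the previous lot ends (down events assumed already removed)."""
    takts = []
    lots = sorted(lots, key=lambda lot: lot["end"])
    for prev, curr in zip(lots, lots[1:]):
        if prev["recipe"] == curr["recipe"] and curr["start"] < prev["end"]:
            takts.append((curr["end"] - prev["end"]).total_seconds() / 3600.0)
    return takts

def golden_baseline(takts):
    """5th percentile takt time of the steady-state population, which
    corresponds to the 95th percentile WPH (25 / takt for 25-wafer lots)."""
    return statistics.quantiles(takts, n=20)[0]  # first cut point = 5th pct
```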

Once the total speed loss has been quantified, the next objective is to segregate the loss into one of five loss categories, which facilitates root cause analysis.

• Equipment and Process Loss is defined as the percentage difference between the measured average steady-state speed for the entire toolset and the measured steady state of the Golden Tool (refer to Fig. 1 for a Lithography example).

• Interrupt Loss is defined as the percentage difference between the measured steady state of a given tool and its average recipe train speed, for a given recipe. The overall interrupt loss is the average of the interrupt loss across all tools in the toolset and all recipes.


• Recipe Change Loss is defined as the percentage difference between the takt time of the first lot in a recipe chain and the average takt times of the successive lots in the same recipe chain factored by the frequency of recipe changes.

• Batch Size Loss is defined as the percentage difference between the maximum batch size and the average batch size (refer to Figure 2 for a Wet Bench example).

• Lot Discontinuity Loss is defined as the difference in speed between where the tool (or toolset) is operating on the speed curve and the asymptote of the curve. The speed curve is created by calculating the fill time of the tool, and then averaging that fill time across successively longer lot cascades. The fill time is calculated as the difference between the RTT and the steady-state (5th percentile) takt time. The average cascade length corresponds to where the tool (or toolset) is operating on the speed curve. The general formula for creating the curve, assuming a 25-wafer lot, is: Speed (WPH) = 60 / (Takt Time + Fill Time / Cascade Length) × 25 (a short sketch of this curve follows the list).
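A minimal sketch of the speed-curve formula above; the example takt and fill times are hypothetical, and the minutes-based units are an assumption inferred from the 60-minute numerator.

```python
def speed_wph(takt_min, fill_min, cascade_len, wafers_per_lot=25):
    """Speed curve from the Lot Discontinuity Loss definition:
    WPH = 60 / (takt + fill / cascade_length) * wafers_per_lot,
    with takt and fill times in minutes (units assumed here)."""
    return 60.0 / (takt_min + fill_min / cascade_len) * wafers_per_lot

# Hypothetical example: a 15.6-minute steady-state takt with a 12-minute
# fill time. As the cascade lengthens, speed approaches the steady-state
# asymptote of 60 / 15.6 * 25 ~= 96 WPH.
for n in (1, 2, 5, 10, 20):
    print(n, round(speed_wph(15.6, 12.0, n), 1))
```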

[Figure 2: "300mm Speed Curve Summary for Wet Bench Tool Set." X-axis: Cascade Level (batches), 1-20; Y-axis: % of Steady State Speed, 0-100%. Curves shown: SS Golden Curve, Equip/Proc Loss, Interrupt Loss, Recipe Change Loss, Batchsize Loss, Actual Curve (Avg), Cascade Size.]

Figure 2 – Example of a wet bench speed loss summary with all loss categories illustrated (Recipe Change loss = 0%).

INTERRUPT LOSS

The initial intention in identifying Interrupt loss was to isolate the impact of alarms and aborts on the tool speed. However, we have found that on many tool types there is a high impact from lot availability, or loading efficiency. For example, during steady-state production a litho cluster should have 2 or 3 lots in process. If a cluster has only one lot finishing processing at the exposure tool, and a new lot starts at the coater, the exposure tool will go idle for several minutes while the cluster stays in a production state, and the result is a longer lot-to-lot takt time between these two lots as they finish processing on the cluster.

Figure 3 illustrates several examples of loading efficiency impacting Interrupt Loss. To maintain steady-state production, the next lot needs to start ~16 minutes after the previous lot; when this occurs, the average takt time is ~0.26 hrs. However, the lots marked with an asterisk (*) start ~22 minutes after the previous lot, and the resulting takt times from this delay are 0.30 hrs and 0.35 hrs. Since these two lots were not started in time to maintain steady-state speed, meaning they were not efficiently loaded (whether due to operational/logistical issues or WIP availability issues is not clear at this point), the average cascaded takt time increased 3.4%, from 0.26 to 0.27 hrs. This speed loss was not the result of tool alarms, but of the WIP not being available or not being loaded efficiently enough to sustain steady-state production on the tool.

MOVE_OUT | WAFER_OUT | RUN TIME (hrs) | Lot Start | Start Delta (min) | Lot End | Casc | Takt Time (hrs)
11/1/06 6:01 | 25.00 | 0.61 | 11/1/06 5:24:48 | – | 11/1/06 6:01 | 1 | –
11/1/06 6:16 | 25.00 | 0.85 | 11/1/06 5:25:52 | 1.07 | 11/1/06 6:16 | 2 | 0.260
11/1/06 6:32 | 25.00 | 0.83 | 11/1/06 5:42:35 | 16.72 | 11/1/06 6:32 | 3 | 0.262
11/1/06 6:50 | 25.00 | 0.76 | 11/1/06 6:05:05 | 22.50 | 11/1/06 6:50 | 4 | 0.301 *
11/1/06 7:05 | 25.00 | 0.74 | 11/1/06 6:21:29 | 16.40 | 11/1/06 7:05 | 5 | 0.255
11/1/06 7:19 | 25.00 | 0.96 | 11/1/06 6:22:05 | 0.60 | 11/1/06 7:19 | 6 | 0.226
11/1/06 7:33 | 25.00 | 0.91 | 11/1/06 6:39:04 | 16.98 | 11/1/06 7:33 | 7 | 0.233
11/1/06 7:46 | 25.00 | 0.87 | 11/1/06 6:54:35 | 15.52 | 11/1/06 7:46 | 8 | 0.224
11/1/06 8:03 | 25.00 | 0.89 | 11/1/06 7:10:31 | 15.93 | 11/1/06 8:03 | 9 | 0.280
11/1/06 8:20 | 25.00 | 0.86 | 11/1/06 7:28:37 | 18.10 | 11/1/06 8:20 | 10 | 0.271
11/1/06 8:35 | 23.00 | 0.87 | 11/1/06 7:43:36 | 14.98 | 11/1/06 8:35 | 11 | 0.265
11/1/06 8:52 | 25.00 | 0.83 | 11/1/06 8:02:45 | 19.15 | 11/1/06 8:52 | 12 | 0.272
11/1/06 9:07 | 25.00 | 0.98 | 11/1/06 8:08:39 | 5.90 | 11/1/06 9:07 | 13 | 0.257
11/1/06 9:28 | 25.00 | 0.95 | 11/1/06 8:31:30 | 22.85 | 11/1/06 9:28 | 14 | 0.351 *
11/1/06 9:46 | 25.00 | 1.07 | 11/1/06 8:42:10 | 10.67 | 11/1/06 9:46 | 15 | 0.292
11/1/06 10:01 | 25.00 | 1.09 | 11/1/06 8:56:33 | 14.38 | 11/1/06 10:01 | 16 | 0.263

Figure 3 – Loading inefficiency increases lot-to-lot takt time, thereby reducing speed (lots marked * were started late).

Separating the WIP/Loading portion of Interrupt Loss from the alarm portion is what our analysis method focuses on. It is essential for the Process and Equipment engineers to understand the impact of alarms on the tool's speed. Likewise, it is essential for Manufacturing Operations and Capacity Planning to understand how much "idle" is being disguised as production-time speed loss.

IDEA BEHIND LOADING LOSS

The speed loss discussion in [give reference to earlier speed loss paper] defines interrupt loss as the difference between the best speed and the average speed a tool achieves when running a cascade (i.e., a continuous train) of lots of the same recipe.


Typically the tool cannot achieve the best speed every time, for a variety of reasons. Besides random variations, various interruptions and processing errors, with alarms triggered by SPC among the top reasons, are why the average speed is usually lower. In the case of sequential lot processing tools, such as photolithography tools and wet benches, delays in loading a lot on the tool can also contribute to interrupt loss.

A photolithography (henceforth "litho") tool consists of lot loading port(s), an optical module known as a scanner (or stepper), and a series of modules that deposit photo-sensitive chemicals on wafers and then develop those chemicals after the wafers are exposed to light. The part of the litho tool that deposits photo-sensitive material on the wafer before the scanner is sometimes called the "coat-side track," and the modules after the scanner that develop the patterns laid out on the wafer during exposure are called the "develop-side track." As wafers are picked up from the loading port, they travel through various modules in the coat-side track before entering the scanner. Wafers are exposed in the scanner and then unloaded onto the develop-side track. After being processed through various modules on the develop side, wafers return to their respective lot boxes on the loading port. Most modern litho tools have multiple load ports and multiple coat and develop modules. However, it is not yet the case that there are multiple scanners in the cluster, and the scanner is the most expensive piece of any litho tool. With multiple load ports, coat and develop modules, and usually a single scanner, a litho tool is like a small production line. With proper configuration of these elements, good line balancing can be achieved to maximize utilization of the "bottleneck" element.

When this balancing is achieved and the litho system is working at its optimal speed, a certain number of wafers needs to stay in the litho tool to ensure continuous flow into the bottleneck element. When the bottleneck element of the litho tool is not fed despite enough WIP being available to load on the tool, a gap is generated between wafers processed in the bottleneck element. This processing-time gap lowers the processing speed of the tool, and thus adds to Interrupt loss as we measure it. We intend to quantify a portion of this processing-time gap within the interrupt loss identified for speed loss analysis. Since this gap is generated by gaps in loading lots onto the tool, we call it Loading loss.

LOADING LOSS CALCULATION

A. Defining Optimal Wafer Count

Optimal wafer count in the tool is the number of wafers needed in the tool to ensure continuous flow into the bottleneck element of the tool.

To determine the optimal wafer count, the bottleneck element of the tool must first be identified. In the case of litho tools, the scanner/stepper is the most likely candidate. Except for a few exposure processes, most processes require a number of exposure shots that slows the scanner down enough to exceed the process time of any other module in the litho tool. By simple Gantt-charting of the track and exposure process times of all individual modules, the bottleneck element can be confirmed. In the case of wet benches, though, there may not be an obvious element or "tank" that can be identified as the bottleneck; the tank configuration and the time a batch of wafers takes in each tank define the bottleneck element. The sequences in which different processes are run on these wet benches can also lead to different critical-path possibilities.
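As a minimal illustration of the Gantt-chart check described above, the sketch below simply compares per-wafer occupancy times across modules; the module names and times are hypothetical, and a real check would also account for handoff overlaps between modules.

```python
# Hypothetical per-wafer occupancy times (seconds) for each module in a
# litho cluster; parallel copies of a module divide its effective time.
module_times = {
    "coat": (45.0, 2),      # (seconds per wafer, parallel module count)
    "bake": (60.0, 3),
    "scanner": (30.0, 1),
    "develop": (50.0, 2),
}

# Effective time per wafer = raw time / number of parallel modules.
effective = {m: t / n for m, (t, n) in module_times.items()}
bottleneck = max(effective, key=effective.get)
print(bottleneck, effective[bottleneck])  # -> scanner 30.0
```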

After a bottleneck element is identified, wafer distribution in the tool is identified to keep this bottleneck running all the time. The total number of wafers needed to achieve this distribution is the optimal wafer count needed to avoid any loading loss.

B. Counting Wafers

To calculate loading loss, the total number of wafers in the tool needs to be compared to the optimal wafer count. Unfortunately, this information is not always available. In lieu of that, a practical approach is taken to count wafers in the tool. This approach uses data that is already available and used for speed loss analysis: lot-start and lot-end timestamps, along with the wafer count per lot. This data, used with some assumptions, can provide a good approximation of the wafer count in the tool, and the resulting Loading loss. First lots are excluded, because the impact of the first lot in a cascade is accounted for by the Lot Discontinuity calculation.

Figure 4 shows an instance of several lots being processed on a tool. Let us consider lot #4 and the number of wafers getting processed when it starts. Most litho clusters process lots in a serial manner, meaning wafer 25 from lot #1 will begin processing before wafer 1 from lot #2 starts. This serial production simplifies counting the number of wafers processing in the cluster. For example, referring to Fig. 4, when lot #4 starts it is clear that lot #1 has not yet ended. Therefore, lots #2 and #3 must have all their wafers in process in the cluster. The next step is to determine how many wafers from lot #1 are still in production. Assuming that all wafers in a lot take an equal amount of time, any elapsed time on the lot is equivalent to wafers from that lot: dividing the lot processing time by the wafers per lot creates a time per wafer that can be used to assess how many wafers from lot #1 are still in production. Lot #4 has about 5 minutes of overlap with lot #1, and based on the time-per-wafer calculation, there will be 2 wafers from lot #1 still in production. As a result, when lot #4 starts production there will be 52 total wafers in production: 25 wafers from lot #3, 25 wafers from lot #2, and 2 wafers from lot #1.
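The counting rule above can be sketched as follows, assuming serial lot processing and evenly paced wafers within a lot; the data layout and example numbers are illustrative.

```python
from datetime import datetime as dt

def wafers_in_tool_at(start_time, earlier_lots):
    """Estimate how many wafers are still in the cluster at start_time.
    earlier_lots: dicts with 'start'/'end' datetimes and a 'wafers' count,
    for lots that began before start_time."""
    total = 0
    for lot in earlier_lots:
        if lot["end"] <= start_time:
            continue  # this lot has fully exited the tool
        run_s = (lot["end"] - lot["start"]).total_seconds()
        time_per_wafer = run_s / lot["wafers"]  # even-pacing assumption
        remaining_s = (lot["end"] - start_time).total_seconds()
        # Wafers not yet unloaded, capped at the lot size for lots that
        # are still entirely inside the cluster.
        total += min(lot["wafers"], round(remaining_s / time_per_wafer))
    return total

# Hypothetical check echoing the lot #4 example: ~5 minutes of overlap
# with a 51-minute, 25-wafer lot leaves about 2 wafers in production.
lot1 = {"start": dt(2006, 11, 1, 5, 25), "end": dt(2006, 11, 1, 6, 16), "wafers": 25}
print(wafers_in_tool_at(dt(2006, 11, 1, 6, 11), [lot1]))  # -> 2
```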

[Figure 4: "Cascaded Lots Example." Horizontal bars represent each lot's processing time, plotted by lot production sequence (1-9) against time (increasing left to right). Annotations: when one lot starts, 2 full lots and one partial lot are in the tool; when another lot starts, only 1 partial lot is in production.]

Fig. 4. Example of how lots are cascaded on a litho cluster.

Once the method for counting the total wafers in production is determined, the next step is to temper the average by the minimum parallel wafer requirement of the tool. For example, the given litho cluster requires a minimum of 53 wafers in process to maintain steady-state production (53 being an example; the actual number will vary depending on the tool configuration). The maximum capacity of the given cluster is over 80 wafers. However, achieving a wafer parallel factor of more than 53 does not increase the speed of the cluster, while whenever the cluster is running in continuous mode with fewer than 53 wafers, the tool is losing speed.

To temper the average number of wafers processing in parallel in the cluster, the wafer count is capped at the optimal wafer count. Figure 5 shows the difference between the total number of wafers in the cluster and the wafer count that was used for calculating the wafer parallel factor. The "Parallel Wfrs Total" column shows that the cluster frequently has many more wafers than are needed to maintain steady-state speed. However, since having more than the minimum optimal wafer count does not increase speed, these higher wafer counts cannot be used to increase the average wafer parallel factor. For example, using the actual number of wafers processing, the average wafer count in DUV_01 was 62.7 wafers, which is higher than the minimum optimal wafer count and would thereby indicate that there was never any loading loss on the tool. Based on direct observation, we know that the tool suffers periodic loading losses, so this simple average cannot be the correct number.

Lot Start | Lot End | Wafers | Wfrs for Loading Loss | Parallel Wfrs Total
02/01/08 5:10 | 2/1/08 6:06 | 25.00 | 53.0 | 88.6
02/01/08 5:20 | 2/1/08 6:19 | 25.00 | 53.0 | 90.0
02/01/08 5:42 | 2/1/08 6:46 | 13.00 | 53.0 | 69.4
02/01/08 6:11 | 2/1/08 7:26 | 25.00 | 24.7 | 24.7
02/01/08 5:38 | 2/1/08 7:35 | 23.00 | 53.0 | 86.3
02/01/08 6:18 | 2/1/08 8:02 | 25.00 | 53.0 | 79.5
02/01/08 6:23 | 2/1/08 8:03 | 1.00 | 53.0 | 86.0
02/01/08 7:17 | 2/1/08 8:11 | 25.00 | 53.0 | 70.1
02/01/08 7:37 | 2/1/08 8:33 | 25.00 | 41.2 | 41.2
02/01/08 7:38 | 2/1/08 8:42 | 22.00 | 53.0 | 72.8

Fig. 5. Example of total wafers processing versus the wafer count used to average the wafer parallel factor.


C. Weighting the Wafer Count

The final part of the calculation is weighting the lot wafer counts by a time factor. The current method is a type of sampling technique where the data source is a lot-event log file, which facilitates wafer counting at a lot-event level. However, this type of sampling equalizes all lot events, meaning short-duration events get the same "weight" as long-duration events. What we need the wafer parallel factor to represent is the average state of the cluster over time, so long-duration events should affect the cluster state more than short-duration events.

For example, at the start of a given lot, if the cluster only has a few wafers in process, that state will change quickly as the current lot starts processing. So, the cluster status of being in a low wafer state is a relatively short duration state. In contrast, at the start of a given lot, if the cluster has many wafers in process, this high wafer state will continue as the current lot starts processing and the cluster will take a correspondingly longer time to reduce the wafer count or change its state.

The method we used to weight the wafer count is to calculate the total overlap time per lot. The overlap time is defined as the sum of the cascaded lots' takt times (see Fig. 6). For each lot, sum the takt times of every lot that is cascaded with the current lot to get the appropriate time weighting for that lot's wafer count.
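A small sketch of this weighting, under the assumption that "cascaded with" means lots whose processing intervals overlap the current lot's interval (and that a lot's own takt is included in its sum); the production definition may differ in these details.

```python
def overlap_times(lots, takts):
    """For each lot, sum the takt times of all lots whose [start, end]
    intervals overlap the current lot's interval. lots and takts are
    parallel lists; takts are in the same units as the overlap columns
    in Fig. 6 (minutes, by inspection of the table)."""
    weights = []
    for cur in lots:
        w = sum(
            t for lot, t in zip(lots, takts)
            if lot["start"] < cur["end"] and lot["end"] > cur["start"]
        )
        weights.append(w)
    return weights
```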

Lot Start | Lot End | Wafers | Wfrs for Loading Loss | Overlap Time | Sum-product
02/01/08 5:10 | 2/1/08 6:06 | 25 | 53.0 | 46.7 | 2474.2
02/01/08 5:20 | 2/1/08 6:19 | 25 | 53.0 | 55.1 | 2920.3
02/01/08 5:42 | 2/1/08 6:46 | 13 | 53.0 | 54.9 | 2910.6
02/01/08 6:11 | 2/1/08 7:26 | 25 | 24.7 | 67.5 | 1668.6
02/01/08 5:38 | 2/1/08 7:35 | 23 | 53.0 | 89.4 | 4738.2
02/01/08 6:18 | 2/1/08 8:02 | 25 | 53.0 | 103.3 | 5474.0
02/01/08 6:23 | 2/1/08 8:03 | 1 | 53.0 | 76.3 | 4043.9
02/01/08 7:17 | 2/1/08 8:11 | 25 | 53.0 | 45.0 | 2383.2
02/01/08 7:37 | 2/1/08 8:33 | 25 | 41.2 | 31.1 | 1280.2
02/01/08 7:38 | 2/1/08 8:42 | 22 | 53.0 | 40.5 | 2147.4

Fig. 6. Calculation of the overlap time per lot for weighting the wafer count per lot.

D. Calculating the Wafer Parallel Factor and Loading Loss

The wafer parallel factor can now be calculated using the tempered wafer count weighted by the overlap time. The product created from multiplying tempered wafer count by the overlap time (Figure 6) is summed across all lots and divided by the total time to get the wafer parallel factor. For example, using the wafer count tempered by the minimum optimal wafer limit of 53 and weighted by the overlap time, the wafer parallel factor is 43.3 (versus the 62.7 result calculated as a straight average of the lot wafer counts), which corresponds to the observed loading loss on the tools.
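Putting the tempering and weighting together, a compact sketch of the wafer parallel factor and the resulting Loading loss, using the illustrative 53-wafer minimum optimal wafer count from the examples above:

```python
def wafer_parallel_factor(rows, mowc=53.0):
    """rows: (wafers_at_lot_start, overlap_time) pairs per lot.
    Each wafer count is tempered (capped) at the minimum optimal wafer
    count, weighted by its overlap time, and averaged over total time."""
    weighted = sum(min(wafers, mowc) * t for wafers, t in rows)
    total_time = sum(t for _, t in rows)
    return weighted / total_time

def loading_loss(wpf, mowc=53.0):
    """Loading loss = 1 - WPF / MOWC."""
    return 1.0 - wpf / mowc

# Echoing the worked example: a weighted WPF of 43.3 against a
# 53-wafer optimum gives a Loading loss of about 18.3%.
print(round(loading_loss(43.3), 3))  # -> 0.183
```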

Now that we have calculated a wafer parallel factor that accurately captures the average loaded state of the tool, we can use this number to calculate the loading loss. The steady-state takt of the tool corresponds to the minimum optimal wafer count, which was 53 wafers in the previous examples. The tool's speed is directly proportional to the number of wafers in the tool (when running in continuous mode): the fewer the wafers in the tool, the longer the average takt time between wafers, and the slower the tool speed. Therefore, the ratio of the wafer parallel factor to the minimum optimal wafer count describes the Loading loss. To put this in a formula and an example:

Loading loss = 1 − WPF / MOWC

where WPF is the wafer parallel factor and MOWC is the minimum optimal wafer count. For the example above: Loading loss = 1 − 43.3 / 53 ≈ 18.3%.

SUMMARY

In going through the process of developing a method to evaluate the Loading loss impact on our tools, we gained an appreciation for how difficult the phenomenon is to measure. Parallel processing tools like litho clusters or wet benches have loading "windows": if the next lot is loaded within this window, the tool can "catch up" and fill the gap in the material flow. This loading window varies with every change in recipe duration, making exact measurement of the impact very difficult.

By monitoring the average wafers in the tool and creating the wafer parallel factor, we are able to create a metric that indicates how optimally we are keeping the tool loaded. The results from this metric consistently correspond to observations and speed loss results. For example, litho tool set A is consistently more heavily loaded than tool set B, and the resulting Loading loss (Figs. 7 and 8) and wafer parallel factor calculations indicate that this observed loading difference is impacting the tool speeds.

Tool Set A

Tool | Wafer Parallel Factor | Interrupt Loading Loss | Total Interrupt Loss
Litho_01 | 49.8 | 6.0% | 23.0%
Litho_02 | 49.3 | 6.9% | 22.2%
Litho_03 | 50.9 | 4.0% | 24.2%
Litho_04 | 49.6 | 6.3% | 25.5%

Fig. 7. Loading loss summary for a heavily loaded tool set.

Tool Set B

Tool | Wafer Parallel Factor | Interrupt Loading Loss | Total Interrupt Loss
Litho_01 | 43.3 | 18.4% | 24.6%
Litho_01 | 46.0 | 13.2% | 20.2%
Litho_01 | 46.8 | 11.7% | 25.5%

Figure 8. Loading loss summary for a tool set with medium loading.
