REPORT FOR ANALYTICAL CHEMISTS
Guidelines for Interlaboratory Testing Programs

RAYMOND H. PIERSON AND EDWARD A. FAY
U. S. Naval Ordnance Test Station, China Lake, California
Interlaboratory testing demands careful and extensive planning in order to achieve worth-while and satisfying results. This article, which was presented at the 135th National Meeting of the American Chemical Society at Boston, reviews the statistical and practical problems of planning and conducting an interlaboratory testing program. Possible benefits from such a program are enumerated, and possible pitfalls are pointed out. It is recommended that the planning and coordination of the program should be the responsibility of a single chairman. Qualifications for such a chairman are outlined and practical directions are given for his guidance in organizing the work. Methods of achieving adequate replication, randomization, and symmetry are suggested. Rejection of suspect divergent values at the operator level by a statistical criterion (Q test) is advocated. Attention is called to some statistical aspects of cooperative testing work, such as considerations of bias and confounding, that differ from those commonly encountered in a single laboratory.
INTERLABORATORY tests, cooperative test programs, or "round robins"—as the programs are frequently but rather loosely termed—have been and probably will continue to be extensively used by analytical chemists and others in technical fields. If an inexperienced group plunges into a round robin without competent guidance and good planning, the chances are great that the results will be disappointing. Indeed, the results from interlaboratory tests have turned out to be worthless on so many occasions that many people have formed the opinion that the methodology itself has no value. This paper is in defense of the technique and advocates the use of cooperative testing programs, but only after considerable thought has been given to the design of the experiment. The interlaboratory test is a good tool when properly applied, but it is an expensive procedure and subject to many pitfalls. Both statistical and practical pitfalls are involved. Many good references provide statistical background and the fundamentals of statistical design applicable to cooperative testing (6, 7, 10, 19, 22), but no book has been written that deals exclusively with this subject and only a few good magazine articles (14, 15, 20) and pamphlets (2, 3) bear directly on the topic. This article extracts statistical information from the standard references and attempts to organize it for the use of participants in interlaboratory programs, especially those who have not had much previous experience. The paper includes concepts derived from experiences of the Joint Army-Navy-Air Force Panel on Analytical Chemistry of Solid Propellants and is intended as a practical guide for organizing and conducting round robins. The key person in a cooperative testing program is its chairman. Rarely is it possible to find an ideal chairman with all the necessary background in analytical chemistry and statistical techniques. The main purpose of this paper is to provide guidance for the chairman who does not consider himself an expert at the task, but who, nevertheless, finds himself selected to carry out the project. In the past a good deal of effort has been devoted toward developing standardized approaches to cooperative work.
Because the field is so complex, this paper advocates a more flexible approach; it provides general guidelines and supplements these with several practical check lists.
Possible Benefits from Interlaboratory Testing
RAYMOND HENRY PIERSON is a research chemist in the field of rocket propellants at the U. S. Naval Ordnance Test Station, China Lake, Calif. He was born in Chrisman, Ill., November 24, 1897. From 1920 to 1943 he served as a chemist, principally in the analytical phases, for the Arthur R. Maas Chemical Co., Union Oil Co., Gilmore Oil Co., Smith-Emery Co., Allied Plastics, and Adhere, Inc. In 1942 he received his A.B. from the University of Southern California (magna cum laude). In 1943 he became Laboratory Superintendent at the U. S. Naval Shipyard at Terminal Island, Calif., and in 1947 joined the research staff at the Naval Ordnance Test Station. He is interested in visible, ultraviolet, and infrared spectrophotometry and x-ray diffraction, absorption, and fluorescence methods as applied to problems of ordnance chemistry. He is keenly interested in statistical design and interpretation of experiments. He is a member of the American Chemical Society, American Association for the Advancement of Science, Scientific Research Society of America, American Society for Testing Materials, and is past chairman of the Joint Army-Navy-Air Force Panel on Analytical Chemistry of Solid Propellants.
For an interlaboratory testing program to be successful, all participants must have a clear conception of the benefits to be derived from the program. Misunderstanding or vagueness regarding the possible benefits has frequently led to disappointment in the results. What then can be gained by subjecting a method of analysis to an interlaboratory test? One important dividend is that each participant can see how well or how poorly his laboratory performed in comparison with a number of other laboratories engaged in similar analytical work. If his laboratory did well or about average, he gains confidence in his own work or that of his assistants. If his laboratory showed wide variability or a high divergence from the general averages, he knows that improvement is needed in techniques, equipment, or supervision at his establishment. These benefits may be had merely by examination of the raw data or graphs showing means and ranges and without benefit of more than elementary statistics. As a second benefit, interlaboratory testing can sometimes be used for sharing the workload when it is desirable to compare a relatively large number of methods or to test a new procedure against several others. This is especially apropos when different instrumental approaches are to be compared, and the required equipment is spread among the various participating laboratories. A block diagram can be set up so that each laboratory tests one specific procedure against one or two others. A third advantage of the interlaboratory test is that it provides a cross-sectional or unprejudiced estimate of the value of a proposed method. The originator of a
method often obtains better results with it than others are able to achieve. There may be several reasons for this difference in evaluation. A properly conducted interlaboratory test may reveal that the originator was too optimistic about the precision and accuracy readily attainable (some personal bias existed), or that considerable practice with the new method is required for good precision. The originator of a method is likely also to be too optimistic regarding the time required for the analysis. A fourth advantage of the interlaboratory test is that it can provide a more realistic average for the time factor. Fifth, when a number of methods are compared, the cooperative work will also show which methods the participants prefer with respect to over-all convenience, space requirements, availability or cost of equipment, type of operator required, and safety. These are factors that the originator may overlook or suppress. Sixth, the program will show whether the descriptions of the methods are adequate or need improvement. In addition to the specific advantages mentioned, it is frequently to be expected that carefully designed interlaboratory work will yield statistical evaluations of precision with respect to a number of factors such as laboratories, operators, days, and levels of ingredient. Similar evaluations with respect to accuracy may be obtainable if standard samples or standard reference methods of known accuracy are available.
Possible Pitfalls in Interlaboratory Testing
Failures in cooperative testing programs may be caused by the following major factors:

Benefits derivable from the program not fully understood
Chairman not fully qualified for the task and unaware of some of the requirements
Objectives not clearly stated and understood
Improper selection, preparation, or packaging of samples
Inadequate written instructions from the chairman to the participants
Inadequate statistical design
Inadequate statistical evaluation

The first of these hazards has been discussed in some degree in the preceding remarks. The following sections cover the other six hazards and offer a set of guidelines for the chairman.
Chairman's Qualifications and Responsibilities

Qualifications for a chairman which might be considered as minimum requirements are:

At least one university course in elementary probability theory and statistical methods
Familiarity with the t test for comparing two means
Familiarity with the F test for comparing several means or two variances
Ability to use a test for homogeneity of several variances, such as the M test (15, Tables 31 to 33)
Familiarity with a test (such as Dixon's Q test) for rejection of data; a short illustrative sketch of this test follows the report check list below
Ability to carry out relatively simple analysis of variance calculations
Acquaintance with most of the references of this paper, especially 1-3, 14, and 20
If the chairman is not a statistician, he should avail himself of the consulting services of a statistician, preferably one in his own organization

In initiating an interlaboratory test, the chairman should assume the responsibility for concluding the project by the preparation and distribution of a complete, formal, written report. A check list for this report is helpful in avoiding omission of essential details. The report may contain all of the following items:

1. A title (and sometimes a serial number)
2. The report date
3. Security classification, if required
4. The name of the chairman and his activity
5. Acknowledgment
6. A list of participating activities
7. A clear statement of the objectives in an itemized form
8. Description of the samples used, including identification, form, and source
9. Description of test methods
10. A summary of the findings and a statement about each of the objectives, indicating how well the program fulfilled each objective
11. All the raw data and suitable collective or summary tables
12. Appropriate graphs—e.g., means, ranges of replicates or confidence limits for the various laboratories
13. The statistical evaluation of the findings
14. Comments from participants about samples, test methods, forms used for collecting the data, and statistical design
15. Recommendations, when appropriate
16. An appendix showing instructions issued and data forms used
17. A list of literature cited
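The Q test named in the qualifications above requires very little computation. The short Python sketch below is illustrative only: the replicate values are invented, and the 90% critical values are the commonly tabulated ones for the simple Dixon ratio, not figures taken from this article; a statistical table should be consulted before use.

# Illustrative sketch of Dixon's Q test for one suspect value in a small
# set of replicates. Critical values are the commonly published 90% values
# for n = 3 to 10.

Q_CRIT_90 = {3: 0.941, 4: 0.765, 5: 0.642, 6: 0.560,
             7: 0.507, 8: 0.468, 9: 0.437, 10: 0.412}

def dixon_q(values):
    """Return the Q ratio and the suspect value at the more extreme end."""
    data = sorted(values)
    spread = data[-1] - data[0]
    if spread == 0:
        return 0.0, None                 # all replicates identical
    gap_low = data[1] - data[0]          # gap between lowest value and its neighbor
    gap_high = data[-1] - data[-2]       # gap between highest value and its neighbor
    if gap_low >= gap_high:
        return gap_low / spread, data[0]
    return gap_high / spread, data[-1]

# Hypothetical replicates from one operator; 0.93 looks divergent.
replicates = [0.61, 0.63, 0.64, 0.65, 0.93]
q, suspect = dixon_q(replicates)
verdict = "rejected" if q > Q_CRIT_90[len(replicates)] else "retained"
print(f"Q = {q:.3f}; suspect value {suspect} {verdict} at the 90% level")

For these invented values Q = 0.875, which exceeds the 90% critical value of 0.642 for five replicates, so the divergent value would be set aside and an alternative determination made.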
It is the chairman's responsibility to make sure that the needed elements will be available for his report. The report for an interlaboratory testing program should include a number of elements usually omitted from journal publications. For example, articles for ANALYTICAL CHEMISTRY will contain either a table or a graph but not both when they show the same thing, and it is usually desirable to omit much of the detailed data. On the other hand, it is strongly recommended that for reports of interlaboratory tests items 11 and 12 always be included. Space in such reports is not a serious limitation. The participants often wish to see all the values sent to the chairman. They are entitled to this complete information and also should have the opportunity of examining all the statistical calculations and evaluations which the chairman has made. When the group meets for discussion of the results, it may be advantageous in some cases to use the tables and in others to use the graphs. Item 14 has great value in improving the proposed method of analysis as a direct benefit of the cooperative work and item 16 is of value in the planning of future test programs.
Objectives
The objectives of the program should be defined and agreed upon by the participants in the early stages of planning, before samples are chosen and shipped. Guidance is available in references 2, 3, 6, 14, 20, and 21.
EDWARD ALLEN FAY, born in Berkeley, Calif., August 13, 1918, has been a mathematical statistician in the Statistics Branch, Research Department, U. S. Naval Ordnance Test Station, since 1950. He received his A.B. (mathematics) from the University of California at Berkeley in 1939 and his A.M. (mathematics) at Harvard University in 1941. After graduating, he served for one year each as a teaching assistant in mathematics at the University of Rochester and as an inspector of ordnance material at the Rochester Ordnance District. He then served in the Army for three years. From 1946 to 1950 he was a graduate student in mathematical statistics at the University of California at Berkeley. His chief area of interest is mathematical statistics, including combinatorial theory, stochastic processes, and techniques of sampling inspection. He is a member of the American Mathematical Society, Mathematical Association of America, Institute of Mathematical Statistics, Association for Symbolic Logic, and the Scientific Research Society of America.
Objectives of various cooperative programs may differ a great deal. For example, Willits (21) and Wernimont (20) describe contrasting approaches that have widely different objectives. In one case reported by Willits, a number of laboratories made determinations on two types of samples representing two chemical species and two levels of the ingredient (element), using two basic procedures but with variations of their own choice (equivalent to many methods). The primary objective was to select the best method or methods from the ones examined. In a case cited by Wernimont, one sample was tested by eight laboratories using one basic procedure with no choice of variation of techniques or equipment, and using two operators in each laboratory, each operator performing two replications of the tests on each of three different days. In this example, the objectives were to measure the precision of the method for one common sample and to assess the laboratory, operator, and day effects. In the Joint Army-Navy-Air Force Analytical Panel work, the approaches given in both of the references cited above have been used—sometimes first the survey approach described by Willits, followed by the more closely controlled experiments advocated by Wernimont. At times, the design of a single round robin has combined features of both approaches.
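As a concrete illustration of the Wernimont layout just described, the short Python sketch below (an assumption of this rewrite, not part of the original panel procedure) enumerates the balanced 8 x 2 x 3 x 2 schedule and randomizes the order of the two replicates within each operator-day cell; the seed, names, and labels are arbitrary.

import random

# Balanced schedule for the design cited from Wernimont: 8 laboratories,
# 2 operators per laboratory, 2 replicate tests on each of 3 days.
random.seed(1959)                     # arbitrary fixed seed for a reproducible schedule
schedule = []
for lab in range(1, 9):
    for operator in ("A", "B"):       # operators are nested within each laboratory
        for day in (1, 2, 3):
            order = [1, 2]
            random.shuffle(order)     # randomize replicate order within the day
            for replicate in order:
                schedule.append((lab, operator, day, replicate))

print(len(schedule), "determinations")    # 8 x 2 x 3 x 2 = 96
print(schedule[:6])                       # first few entries for laboratory 1

A schedule of this kind, distributed by the chairman with the data forms, fixes the run order for every laboratory and supports the later estimation of laboratory, operator, and day effects.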
Samples

Every chemist realizes that a test can be no better than the sampling. Yet, faulty samples have ruined many cooperative testing programs. Usually it is desirable that particle size distribution not constitute a variable. Hence, the sample should be uniform, and the particles sufficiently fine to have no influence on test results. Instructions may need to take into account the possibility of size segregations that might occur during shipment. They may need to warn the participants to use the sample as received (omitting some process, such as grinding, that would ordinarily be used), or to process the sample in some specified way before use—e.g., grinding or drying. Packaging for materials that can either lose volatile ingredients or take on moisture during shipment is a critical item. Changes caused by the aging of samples must be considered and directions given that will eliminate such effects or minimize them.
Chairman's Instructions to Participants

The chairman should issue clear instructions covering such important details of interlaboratory testing as the following:

Methods. It is the chairman's responsibility to see that all participants are supplied with correct descriptions of the method(s) to be tested. These descriptions must be uniform for all concerned and as well written as possible.

Practice Period. Frequently in interlaboratory testing a new method does not meet expectations with respect to precision as compared with older methods. It also happens that sets of replicates by new procedures will need replacement (alternative) values when a test for rejection of data is applied. Lack of experience with the new method is responsible for some of the variability encountered. Hence, it is desirable to ask participants to practice with the new technique until some familiarity with it is achieved before making the determinations to be reported for the interlaboratory test. If the official samples are in short supply a practice sample should be used. Practical limitations of time and expense may determine the extent of the practice period according to the operator's judgment.

Moisture Determination. Frequently also it is essential that the chairman instruct each participant to make a moisture determination on his samples by a prescribed method, and to calculate values for other ingredients to a moisture-free basis.

Sample Weights. Sample weights for individual determinations have sometimes been set by chairmen at an exact number of grams (no variation permitted) in order to avoid errors of recording weighings and to simplify the work
of checking analytical calculations. These two advantages of uniform specific weights are overshadowed by two disadvantages, namely, the introduction of bias (either conscious or unconscious) and the loss of a device for detecting systematic error in the procedure under test (22, p. 40). In the authors' opinion, uniform weights should never be used in cooperative work. It is permissible in most cases to suggest a small range or two small ranges, one being about double the other; a small numerical illustration of this point follows the Data Forms discussion below. There are cases in which the range of weight must be prescribed by the test method itself. For example, in the determination of heat of explosion, density of loading—i.e., relationship of size of sample to size of the bomb—influences the values obtained and must be held within narrow limits.

Data Forms. Forms for the collection of data should be designed, tested by use in the source laboratory, and distributed in triplicate to the participants by the chairman. One form can then be returned to him for his report, and the other two retained for use by the participating laboratory. In addition to spaces for the recording of test data, these forms might also include provision for the noting of the following information:

Assignment of operators
Operator's time
Total elapsed time
Deadline date for return of data to chairman
Remarks or suggestions on testing procedure
Departures from specified procedures

If the identities of participants are not to be coupled with their data, the chairman will assign code numbers to be used on data sheets and in his report. Committee D-13 of the American Society for Testing Materials has recommended a standard (although somewhat flexible) form for data collection (2). In the analytical panel work, the forms have varied widely depending upon the type of problem and a standard form has not been feasible. Examples of forms used in the first twelve round robins of the Analytical Panel may be seen in a collective report by Pierson (17).
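The point made above about prescribed sample weights can be seen with a small invented example: a constant absolute error, such as an unsuspected blank, shifts the percentage result by different amounts at different sample weights, so two weight ranges expose it while a single fixed weight hides it. All figures in the Python sketch below are hypothetical.

# Hypothetical numbers only: a constant blank of 2 mg of apparent ingredient
# superimposed on a sample that truly contains 50.0% of the ingredient.
TRUE_FRACTION = 0.500
BLANK_G = 0.002

for weight_g in (1.0, 2.0):        # two suggested weight ranges, one about double the other
    found_g = TRUE_FRACTION * weight_g + BLANK_G
    print(f"{weight_g:.1f} g sample -> {100.0 * found_g / weight_g:.2f} % found")

# Prints 50.20% at 1 g but 50.10% at 2 g; the weight-dependent result flags
# the systematic error, which a single prescribed weight would conceal.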
Rounding Off the Data. Good rules for rounding off data for reports are conveniently available (1, 7, 19, 22) and may be applied by the coordinator of a cooperative test program in the final stages of report preparation. Rules for data collection in cooperative testing are different! The chairman should specify in his instructions exactly how many digits to the right of the decimal he wishes the analysts to report for each test value obtained. He should set this at the level of the digit that represents the last place the analyst can estimate. If the rounding off is done prematurely—at the operator level—there is a high probability that the greater part of the measures of precision will be thrown away, and that efforts put into the work will be completely wasted. Early rounding off is then to be rigorously avoided in interlaboratory work until the statistical evaluations have been made; it is much better to have too many decimal places than too few (22, p. 7)! Table I is a simple illustration of the complete loss of statistical measures of precision when one decimal place is dropped prematurely. The values of 18 replicate moisture determinations, 6 for each of three laboratories, were estimated and reported to the third decimal place in the table as shown. Observe that if the data were rounded off to two decimals—the place to which the results are customarily reported when they are to be used only for calculating percentage values of other ingredients to a "dry basis"—every replicate would have exactly the same value, 0.02%, all the means would be 0.02%, standard deviations for all three laboratories would be zero, and no statistical evaluations would be possible. With the values shown to three decimals, precisions obtained by laboratories A and B are equal and are better than that of laboratory C, and the three means are different. The statistical significance of these differences can be examined by appropriate tests.
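The arithmetic behind Table I can be checked with a few lines of Python. The sketch below simply recomputes the means and estimated standard deviations from the replicate values quoted in the table and shows what premature rounding to two decimals does to them; it is offered as a verification aid, not as part of the original article.

from statistics import mean, stdev

# Replicate moisture values (%) from Table I, reported to three decimals.
labs = {
    "A": [0.018, 0.018, 0.017, 0.017, 0.017, 0.016],
    "B": [0.024, 0.024, 0.023, 0.023, 0.023, 0.022],
    "C": [0.024, 0.023, 0.020, 0.019, 0.018, 0.016],
}

for name, values in labs.items():
    rounded = [round(v, 2) for v in values]   # premature rounding at the operator level
    print(f"Lab {name}: mean = {mean(values):.4f}, s = {stdev(values):.6f}; "
          f"after rounding: mean = {mean(rounded):.2f}, s = {stdev(rounded):.6f}")

# With three decimals the estimated standard deviations are about 0.00075 for
# A and B and 0.0030 for C; after rounding, every replicate becomes 0.02 and
# every standard deviation is zero, exactly as stated in the text.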
Statistical Design

Simple vs. Complex Designs. In
research work, the use of experimental designs that will test a relatively large number of variables in a single-experiment framework will often yield high returns over simple or "classical" designs. In the authors' opinion, designs used for interlaboratory testing should be kept at a relatively simple level. When a number of different laboratories try to follow complex designs (containing too many objectives), some of the participants are likely to become confused and serious errors may be made. Often the desire arises to test a large number of types of samples, each at more than one level of an ingredient, perhaps more than one ingredient, and each ingredient by several test methods. In one research laboratory with good statistical background a single complex design can be advantageous, but for a number of laboratories a better answer might be to split the work into more than one cooperative testing program. Confounding. If a statistical design is such that it is not possible to estimate separately the effects of two (or more) factors, but only their combined effect, these factors are said to be confounded with each other. In other words, the design fails to provide the means for separating the effects of the confounded factors. Sometimes confounding is purposely used (6). Sometimes it inadvertently enters into the design of experimental work and disrupts statistical evaluations. In cooperative test work, care should be taken to avoid such interference. Confounding of time with other variables can often be eliminated by careful randomization of these other variables (samples, methods, replications, etc.) with respect to time. Thus, instead of introducing time as an effect to be measured, one may use randomization to reduce the size of the experiment. Confounding of operator and laboratory effects under the single heading "laboratory" has often been purposely done in the Analytical Panel programs to avoid the expense of multiple operators for each test within each laboratory. If it is desired to compare several methods
for more than one level of ingredient on more than one type of sample with adequate replication in a number of laboratories, adding an operator factor to the many already present may make the entire program unwieldy and prohibitively expensive. The operator factor is then considered the one that can be sacrificed (confounded with laboratory) with least damage to the results. A recent article by Youden (24) advocates simplicity of design within each laboratory and supports the plan of excluding operator as a factor. Unintentional confounding of operators within the various laboratories engaged in a round robin is to be carefully avoided. As indicated by the statement under Data Forms, the chairman should regulate the assignment of operators within laboratories for each group of tests. For example, if two instrumental methods are being compared, the results would not be statistically valid if some laboratories used a single operator for both instruments while others used two operators. Operator would then be confounded with laboratory in the first case and with method in the second. Randomization. It is basic to good statistical evaluation that the design of the experiment include randomization, replication, and symmetry, all in the proper degree. The chief purpose of randomization is to prevent confounding of factors, particularly of time with other factors. Statistical evaluations may be invalid unless randomization is properly applied. The difficulty of getting all participants of a cooperative program to appreciate the importance of randomization provides some ammunition for those who would do away with interlaboratory testing altogether. It is a source of much disappointment to a chairman if some of the participants carry out all the instructions except those for randomization. It cannot be overemphasized, therefore, that the chairman should do everything he can to get the randomization idea across. Indeed, he should never leave the randomization up to the participants, but
should supply each of them with a prescribed sequence and an earnest plea that it be followed exactly. The data form should reflect the desired sequence whenever possible, and should always be constructed in a manner that will make entries easy when the sequence is followed. Sometimes it is impossible to apply a complete randomization scheme, as, for example, when one chemist is the expert on one type of equipment while another man at the same activity is the logical choice for another method or piece of equipment. Then a partial or compromise randomization is all that is practical. Sometimes all the samples for two different operations can be weighed in a random fashion at the same period, thereby eliminating at least the possibility of time effects on the individual portions from the same sample.

Replication. Too much replication causes a waste of effort; too little fails to give the required sensitivity. Some guidance on replication is given in the American Society for Testing Materials publication D 1421-56T (3). When some practice has been had with a method and some knowledge exists as to the precision which it can show, then it may be found that only two or three replicates will be sufficient to provide the sensitivity desired in an interlaboratory test. It should be remembered, however, that programs conducted at this low level of replication are without benefit of a good means for detecting and rejecting faulty data at the operator level as described in the following section. In the work of the Analytical Chemistry Panel mentioned above, five or six replica-
Table I. Illustration of Rounding Off Data

                               Laboratories
                        A            B            C
                               Moisture, %
Replicate 1           0.018        0.024        0.024
Replicate 2           0.018        0.024        0.023
Replicate 3           0.017        0.023        0.020
Replicate 4           0.017        0.023        0.019
Replicate 5           0.017        0.023        0.018
Replicate 6           0.016        0.022        0.016
Mean, x̄              0.0172       0.0232       0.0200
Est. std. dev., s     0.000753     0.000753     0.00303