Volume 13, Issue 1 pp. 17-34

A feed is only as good as its ingredients – a review of ingredient evaluation strategies for aquaculture feeds

B.D. GLENCROSS
Department of Fisheries – Western Australia, Research Division, North Beach, WA, Australia

M. BOOTH
New South Wales Department of Primary Industries, Port Stephens Fisheries Centre, Nelson Bay, NSW, Australia

G.L. ALLAN
New South Wales Department of Primary Industries, Port Stephens Fisheries Centre, Nelson Bay, NSW, Australia

First published: 11 January 2007
Correspondence: Dr Brett Glencross, PO Box 20, North Beach, WA 6920, Australia. E-mail: [email protected]

Abstract

The evaluation of feed ingredients is crucial to nutritional research and feed development for aquaculture species. In evaluating ingredients for use in aquaculture feeds, there are several important knowledge components that should be understood to enable the judicious use of a particular ingredient in feed formulation. These include information on (1) ingredient digestibility, (2) ingredient palatability and (3) nutrient utilization and interference.

Diet design, feeding strategy, faecal collection method and method of calculation all have important implications for the determination of the digestible value of nutrients from any ingredient. There are several ways in which the palatability of ingredients can be assessed, usually based on variable inclusion levels of the ingredient in question in a reference diet and feeding of those diets under an apparent-satiety or self-regulating feeding regime. However, the design of the diets, the parameters of assessment and the feeding regime can all vary depending on subtleties of the experimental design. Clearly, issues relating to feed intake are the key performance criteria in palatability assessments, and it is important that such experiments maintain sufficient stringency to allow some self-discrimination of the test feeds by the fish. The ability of fish to use nutrients from the test ingredient, or defining the factors that interfere with that process, is perhaps the most complex and variable part of the ingredient evaluation process. It is crucial to discriminate effects on feed intake from effects on the utilization of nutrients from ingredients (for growth and other metabolic processes). To allow an increased focus on nutrient utilization by the animals, there are several experimental strategies that can be adopted, based on variations in the diet design and feeding regime used. Other issues such as ingredient functionality, influence on immune status and effects on organoleptic qualities are also important considerations in determining the value of ingredients in aquaculture feed formulations. A key aspect to note is the need to design all experiments with sufficient experimental power to detect significant effects.

Introduction

Fish diets of the future will include a wider range of alternative ingredients to fishmeal than is currently the case. Many of these ingredients are more complex than fishmeal and require thorough evaluation to determine their nutritional value and appropriate use levels in prospective diets. In the evaluation of specific ingredients, the science of nutrition should ideally endeavour to gain knowledge of the nutritional implications of using those ingredients and, once this knowledge is gained, seek to apply it to commercial feed formulation. While it could be argued that there is no single strategy by which ingredient evaluation should be undertaken, there are clearly pitfalls and problems to be wary of in the evaluation process. This nutritional evaluation process has several key facets that need to be addressed to provide a clear indication of the potential of any ingredient for use in an aquaculture feed.

The fishmeal and oil problem and ingredient risk reduction

Fishmeal has traditionally been considered an important protein source for use in aquaculture diets for both carnivorous and omnivorous species, and many aquaculture formulations still have fishmeal included at levels in excess of 50%. However, being too reliant on any one ingredient presents considerable risk associated with supply, price and quality fluctuations. As a strategy to reduce risk, the identification, development and use of alternatives to fishmeal and oil in aquaculture diets remains a high priority. As a result of the volumes of fishmeal and oil used in aquaculture, especially for carnivorous species, aquaculture of these species is still perceived as a net fish consumer rather than producer, and this practice has raised concerns about the long-term sustainability of these industries (Naylor et al. 2000).

Substantial effort has been expended over the past decades in evaluating a wide range of potential alternatives to fishmeal and fish oils for use in aquaculture diets. Those ingredients can generally be classified into those being derived from either plant origin or terrestrial animal origin.

Plant derived resources include soybean meals, protein concentrates and oils (Kaushik et al. 1995; Refstie et al. 1998, 1999), canola meals, protein concentrates and oils (Higgs et al. 1982; Mwachireya et al. 1999; Burel et al. 2000; Forster et al. 2000; Glencross et al. 2003b, 2004a,b) and lupin meals and protein concentrates (Burel et al. 1998; Booth et al. 2001; Farhangi & Carter 2001; Glencross et al. 2003a, 2004c).

Key potential terrestrial animal ingredients have included resources such as rendered meat meals (Bureau et al. 1999, 2000; Stone et al. 2000; Sugiura et al. 2000; Williams et al. 2003a,b), blood meals (Allan et al. 1999a,b; Bureau et al. 1999) and poultry meals (Bureau et al. 1999; Nengas et al. 1999).

Key evaluation components

We consider that there are several key components in ingredient assessment including ingredient characterization, ingredient digestibility, palatability, nutrient utilization and functionality.

  • 1 Ingredient characterization is the first part of any evaluation process. Chemical composition, variability in composition, and the source and species of origin are all important factors that need to be documented to allow meaningful assessment and reporting of that assessment.

  • 2 Ingredient digestibility is the measurement of the proportion of energy and nutrients that an animal can obtain from a particular ingredient through its digestive and absorptive processes. Several methods have been used to determine diet and ingredient digestibilities in aquaculture species. This review examines these methods and explores some of their key strengths and weaknesses.

  • 3 Determination of ingredient palatability is the next key component of knowledge required about an ingredient before it can be used successfully. Palatability is defined here as the combination of the attractiveness and ingestion of a diet, and is therefore of most relevance to feed development. This is important because, irrespective of how digestible and available the nutrients and energy from an ingredient might be, if the ingredient reduces feed intake then it will have limited value. A range of methods have been used to explore palatability and feed intake issues in aquaculture feeds, and a summary of some of these methods is presented in this review.

  • 4 The determination of nutrient utilization, or of interference with nutrient utilization caused by the incorporation of any one ingredient, is perhaps the most complex step in the ingredient evaluation process. This complexity is largely related to the wide variety of factors that may affect nutrient or energy utilization. The methods employed to examine nutrient utilization are also diverse, and this review highlights some of the problems that occur with their use.

  • 5 Ingredient functionality is another crucial aspect of ingredient evaluation. Irrespective of the compositional or nutritional attributes of an ingredient, if it cannot be introduced into a feed in a manner that allows suitable processing, then it is of diminished value as a feed ingredient. Alternatively, some ingredients may add value to a diet through functional features they contribute to a formulation. This is particularly the case with modern extruded feeds.

Characterization and preparation of ingredients

Detailed compositional information on test samples of all ingredients to be evaluated is critical. A high level of variability among common ingredients is well recognized, and this variability can affect both the nutritional value of an ingredient and the choice of strategy for determining that value (Jiang 2001). As variability can also exist within ingredient samples, it is important that samples are adequately mixed to ensure that what is ultimately evaluated is representative. The preparation of diets and ingredients is also important, as is their long-term storage if they are to provide useful information over extended periods of time.

Characterizing ingredients

A key reason for comprehensively characterizing ingredients is so others can use the findings from the study. Simple identification factors such as the species of origin and whether a genotype or cultivar classification exists (if relevant) should be detailed. Differences between production site, season or year can also be important. For example, within the lupin species of Lupinus angustifolius, there are more than a dozen different commercial cultivars being grown and there is substantial variation in key nutrient parameters such as protein, amino acid and energy content among those cultivars (Table 1). The implications of such variability have been noted. For example, Glencross et al. (2003c) found that there was substantial protein and energy digestibility variation among cultivars.

Table 1. Composition variability within the kernel meals of some narrow-leaf lupin, Lupinus angustifolius, cultivars
Cultivar type Gungurru Tanjil Kalya Merrit Warrah Myallie
Dry matter 873 907 899 903 911 912
Crude protein (N × 6.25) 397 402 429 459 423 469
Phosphorus 4 5 4 4 6 5
Ash 33 33 31 31 33 37
Crude fat 59 58 55 66 56 57
Energy (MJ kg−1 DM) 20.2 20.6 20.7 21.1 20.0 20.4
Arginine 44 47 49 39 45 47
Histidine 12 10 12 10 11 10
Isoleucine 16 17 18 14 17 16
Leucine 28 30 30 23 32 29
Lysine 18 16 20 16 14 12
Methionine 3 2 3 2 5 2
Phenylalanine 16 17 17 14 18 16
Threonine 14 17 15 12 17 16
Valine 16 17 17 14 17 16
  • All values are g kg−1 DM unless otherwise stated. Data from Glencross (unpublished).

The source of the ingredient may also be important. For example, significant problems relating to antinutritional factors (ANF) in canola and rapeseed meals were identified from Canadian-produced meal in studies by Higgs et al. (1982, 1983) and Mwachireya et al. (1999) and in European-produced meals by Burel et al. (2001). However, Glencross et al. (2004a,b) found no significant problems attributable to ANF in Australian-produced rapeseed meals. Identification of the origins of the ingredient being studied at least to country should be considered as a minimum. This is especially important for plant ingredients, as it is well known that soil type and climate can affect the nutritional composition of many grains.

The nature of processing of the ingredient sample prior to addition to the experimental diets also has important implications. Booth et al. (2001) noted clear differences in the chemical composition and also the nutritional value of a range of grain legume meals produced from either whole seed or seed kernels. Similarly, clear differences among rapeseed meals produced through different oil extraction methods have been noted, not only in the composition of the meals but also in their nutritional value (Glencross et al. 2004a,b). Accordingly, some characterization of the processing methods/technologies used to produce the ingredient from its raw or natural state would be useful. It is well known that protein damage sustained during ingredient or diet processing will affect the value of an ingredient (Peres et al. 2003). For example, heat damage to canola meals affects their usefulness when fed to fish (Glencross et al. 2004a), and it is well known that nutritional loss of some amino acids occurs through such heat damage and through the influence of protein, carbohydrates and moisture on Maillard reactions (Oste 1984; Anderson et al. 1993). In vitro assays can be employed to characterize such protein damage (e.g. the reactive lysine assay; Rutherfurd et al. 1997).

In addition to the clear identification of the ingredient of concern, its origins and processing, a detailed analysis of its compositional characteristics should be provided. Ideally, this analysis should be as comprehensive as possible, but key variables such as crude protein (nitrogen × 6.25), total lipids, ash, moisture and gross energy should be considered mandatory for all test ingredients and preferably all ingredients used in any experimental diets. Crude fibre is one traditional element of proximate analysis that is losing favour to more useful analyses such as acid-detergent fibre and neutral-detergent fibre, which relate more closely to levels of cellulose, hemicellulose and lignins (Petterson et al. 1999). A more comprehensive guide to key compositional parameters to be considered is provided in Table 2. In the absence of clear information on the ingredient species, cultivar, origin and processing level, the importance of the compositional analysis increases. As many modern aquaculture feeds are now formulated on a digestible amino acid basis, knowledge of ingredient amino acid composition is increasingly important (Sorenson et al. 2002).

Table 2. Typical composition of commonly used aquaculture feed ingredients (values are g kg−1 DM unless otherwise detailed)
Nutrient Fishmeal1 Yellow lupin2 NL lupin LPC Expeller canola meal SE canola meal SE soybean meal SPC
Dry matter content (g kg−1) 917 903 885 942 898 962 909 939
Crude protein 770 547 415 690 381 431 518 590
Total lipids 68 87 53 93 136 22 47 54
Ash 142 44 33 31 66 86 69 79
Phosphorus 22 6 4 5 24 23 8 9
Gross energy (MJ kg−1 DM) 21.3 20.9 20.4 22.2 23.1 19.6 19.6 20.3
Arginine 43 61 47 78 39 32 42 45
Histidine 25 15 10 15 28 26 14 15
Isoleucine 28 20 15 27 3 3 23 26
Leucine 55 45 29 51 28 25 44 48
Lysine 46 23 14 25 46 41 28 28
Methionine 21 4 3 5 37 30 9 9
Phenylalanine 29 21 16 28 29 27 27 30
Threonine 32 20 16 23 18 16 24 25
Valine 34 19 14 23 66 78 24 27
  • NL lupin, narrow-leaf lupin Lupinus angustifolius (mixed cultivars) kernel meal; LPC, Lupinus angustifolius (mixed cultivars) protein concentrate; SE, solvent-extracted. SPC, soybean protein concentrate; HP300, Hamlet-protein, Horsens, Denmark; EHC, enzymatically hydrolyzed casein. Data from Glencross (unpublished).
  • 1 Chilean anchovetta meal. 2Lupinus luteus (cv. Wodjil) kernel meal.

Details of ingredient composition should include moisture content but other variables should be expressed on a g kg−1 dry matter basis, to help standardize the ingredient information. This is because most feeds are prepared with the addition of water to ingredients followed by a drying process, which dehydrates them to a relatively uniform dry matter content. Therefore, it is more practical to provide a standardized assessment of composition, such as that on a dry matter basis. The moisture content of the original raw material is important when considering ingredient storage and in the feed manufacturing process, but as pointed out will not necessarily have an important bearing on the final moisture content of the feed.
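The standardization to a dry matter basis described above is simple arithmetic; the sketch below illustrates it (the function name and example values are illustrative, not taken from the source):

```python
def as_fed_to_dm(value_as_fed: float, dry_matter: float) -> float:
    """Convert a nutrient concentration from an as-fed (as-received) basis
    to a dry matter (DM) basis.

    value_as_fed : nutrient concentration, g kg-1 as-fed
    dry_matter   : dry matter content of the ingredient, g kg-1 as-fed
    Returns the concentration in g kg-1 DM.
    """
    return value_as_fed * 1000.0 / dry_matter

# Hypothetical meal: 350 g kg-1 crude protein as-fed, 900 g kg-1 dry matter
protein_dm = as_fed_to_dm(350.0, 900.0)  # ~388.9 g kg-1 DM
```

The conversion simply rescales the as-fed value by the dry matter fraction, which is why reporting moisture separately and all other nutrients on a DM basis loses no information.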

The methods used for composition analysis should be consistent with those recommended by the Association of Official Analytical Chemists (AOAC 1993). Specific recommendations pertaining to the evaluation of grain products were made by Petterson et al. (1999), who identified several opportunities for improvement or modifications to methods that made them more suitable for grain products.

The presence and concentration of ANF or bioactive compounds such as protease inhibitors, saponins, glucosinolates and alkaloids have important implications for potential ingredient use and as such form an important part of the characterization of the ingredient. This too must be recorded, as this information can influence the strategy used to evaluate the potential of an ingredient, or even whether it is worth pursuing at all. For a comprehensive review of ANF, refer to Francis et al. (2001).

Availability and price are key determinants of the potential for an ingredient to be used. When characterizing ingredient availability, volume, time of availability (e.g. is it available every day or week, or only during one season each year), source (e.g. is it available everywhere or only in a particular city or region) and accessibility (i.e. supply chain mechanism) are all important. Ingredient pricing is obviously critical, and it is important to explain what drives price change. Weather (especially droughts) will affect the supply and price of grains and other agricultural products, as will changing patterns in ingredient use for other purposes.

Ingredient preparation prior to evaluation

To ensure that any assessment of feed ingredients is undertaken on representative samples, it is important that due consideration is given to the physical preparation of all the feed ingredients with respect to particle size. Fine grinding (200–300 μm) is important to ensure homogeneity in the finished diet. This extends beyond the test ingredients to all those used in any experimental diets. Particle size has been implicated as an important factor affecting the ingredient evaluation process (Kaushik 2001; Nir & Ptichi 2001). A recommendation of 250 μm for maximum particle size was made by the National Research Council (NRC 1993). This recommendation has not been widely adopted, however, with a maximum particle size of 600–800 μm being more typical (Burel et al. 2000; Glencross & Hawkins 2004). Most studies provide little indication of the particle size of any of the ingredients used.

The sample size required will vary according to many factors and every effort should be made to ensure homogeneity of the batch used, irrespective of any samples taken for analysis (Jiang 2001). Principally, a large enough sample or a number of replicate samples should be taken to account for prospective variation within a batch of an ingredient. The specific sample size required, however, will vary according to many factors (Jiang 2001). Notably, any intrasample variability can be largely minimized by thorough mixing of the ingredient sample prior to allocation to any diet and should be considered routine practice in experimental diet preparation.

Evaluation of ingredient digestibility

Modern aquaculture diets are routinely formulated based on digestible nutrient and energy criteria (Cho & Kaushik 1990). Measuring the digestible energy and nutrient value of ingredients and diets simply means measuring the amount of the energy or nutrient that is not excreted in faeces. Energy or nutrients in faeces are clearly unavailable for maintenance or growth and represent one of the major ‘losses’ from intake to tissue growth.

In assessing diet digestibilities, the two key methodological approaches are the direct and indirect assessment methods (Maynard & Loosli 1969). In the direct assessment method, a complete account of both feed inputs and faecal outputs is required. The digestible value of the feeds is then determined on a mass-balance basis. Unfortunately, this method is fraught with problems, largely because of the difficulty and errors involved with collection of accurate data on feed intake and faecal production. Indirect assessment is the alternative. Here, a representative sample of both the feed and the faeces is required and an indigestible marker is added to the diet. The ratio of the marker in the feed and faeces determines dry matter digestibility and is used to calculate digestibility of energy and other nutrients. Indirect assessment gives ‘apparent digestibility’.

Feed issues in ingredient digestibility assessment

Ideally, assessments of the digestibility of ingredients would be made by feeding single ingredients. However, this is rarely possible, because aquatic animals will refuse to eat many diets comprising a single ingredient. Instead, an ingredient substitution approach is usually used, where test diets comprise the test ingredient plus a reference diet component.

The reference diets used for digestibility studies with most aquaculture species have usually been simple, practical diets. Typically, these diets have been based on fishmeal as a key protein and energy source, although other ingredients have also been routinely used, such as soybean meals and wheat flours (Cho & Slinger 1979; Gomes et al. 1995; Glencross & Hawkins 2004). While reference diets are usually formulated to meet requirements for energy and other nutrients, it is acknowledged that diet composition can influence ingredient digestibility (Lupatsch et al. 1997). To address this issue, some attempts at formulating test diets to equivalent protein levels through the complementary substitution of an additional ingredient have been made (Glencross et al. 2003c).

Essentially, there are two methods of ingredient inclusion for specific ingredient digestibility assessment. These are usually referred to as the diet replacement method (DRM) or the ingredient replacement method (IRM) (Aksnes et al. 1996). With the DRM, a test ingredient is added to replace a portion of the reference diet to create a test diet. The digestibility values for both the reference and test diets are then determined and, based on proportionality factors, the digestibility of the ingredient, or any of its nutrients, can be calculated. It is important to note that, with this method, the portion of the reference diet within any test diet must be fully representative of the complete reference diet. For example, all ingredients, including additives and the marker, must be included via the reference diet portion, and not added separately to the test diet at the same levels as in the reference diet.

The IRM also uses a reference diet, but differs in that the reference diet usually contains a single, well-defined reference ingredient at a fixed, moderately high inclusion level (Aksnes et al. 1996). This single ingredient is then replaced with the test ingredients to create the test diets. The assessment of the digestibility of any ingredient is then based on the relative diet digestibility with regard to the reference ingredient. With this method, the digestible value assigned to the test ingredient depends largely on the choice of the reference ingredient and its assigned or measured digestibility values (Morales et al. 1994). Furthermore, by choosing the reference ingredient as one of the test ingredients in the DRM, it effectively becomes possible to capture the strengths of both methods (Glencross & Hawkins 2004; Glencross et al. 2004d).
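Under the DRM, the ingredient digestibility is back-calculated from the test- and reference-diet values by mass balance. A minimal sketch of the nutrient-weighted form follows (the function name and example figures are illustrative; setting all three nutrient concentrations equal recovers the simple unweighted form):

```python
def ingredient_adc(adc_test: float, adc_ref: float, inclusion: float,
                   nutrient_test: float, nutrient_ref: float,
                   nutrient_ing: float) -> float:
    """Back-calculate the apparent digestibility coefficient (ADC) of a
    test ingredient from the ADCs of the test and reference diets.

    inclusion     : proportion of the test diet made up by the ingredient
    nutrient_test : nutrient concentration in the test diet (g kg-1 DM)
    nutrient_ref  : nutrient concentration in the reference diet
    nutrient_ing  : nutrient concentration in the test ingredient
    """
    # Digestible nutrient supplied by the whole test diet ...
    digestible_test = nutrient_test * adc_test
    # ... minus the share supplied by the reference-diet portion ...
    digestible_ref = (1.0 - inclusion) * nutrient_ref * adc_ref
    # ... leaves the share attributable to the test ingredient.
    return (digestible_test - digestible_ref) / (inclusion * nutrient_ing)

# Illustrative figures: 30% inclusion; reference diet 400 g kg-1 protein
# (ADC 0.90); ingredient 500 g kg-1 protein; test diet 430 g kg-1 protein
adc_ing = ingredient_adc(372.0 / 430.0, 0.90, 0.3, 430.0, 400.0, 500.0)  # 0.80
```

The mass-balance logic makes clear why small errors in the reference-diet ADC are amplified at low inclusion levels, which is one reason higher test inclusion levels give more reliable estimates.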

The amount of the test ingredient included in a test diet also has important implications for the rigour of the digestibility assessment (Smith & Tabrett 2004). Typically, the assessment is more reliable where the test ingredient makes a larger contribution to the diet. It is useful to evaluate ingredients at levels typical of those used in practical diets, and a 20–40% inclusion level on an as-received basis is common (Gomes et al. 1995; Allan et al. 1999a). There are also potential benefits in examining a particular ingredient at more than one inclusion level, as this allows the examination of potential interactive effects of ingredients within a feed formulation (Allan et al. 1999a,b). This is especially important for carbohydrate sources, where the effect of inclusion level on digestibility tends to be greater than for protein or lipid sources.

A wide variety of marker types have been used in aquaculture nutrition digestibility studies. While chromic oxide (Cr2O3) is perhaps the most commonly used marker, rare earth metal oxides such as ytterbium oxide and yttrium oxide are gaining favour (Austreng 1978; Ringo 1995; Austreng et al. 2000). For studies focusing on lipid utilization, hydrocarbon markers such as cholestane have proved useful (Carter et al. 2003). While endogenous markers such as acid-insoluble ash and crude fibre have been used, they are somewhat less reliable and more prone to producing data with larger variance (Morales et al. 1999).

Collecting faeces for digestibility assessment

The method of ‘faecal’ collection used in aquaculture nutrition research has been well debated. Essentially, there are three methods adopted by most researchers: dissection, stripping and collection of voided faeces. Where digesta (not called faeces until actually excreted) are collected by dissection or stripping, there is the potential to underestimate digestibility because of incomplete digestion and potential contamination of digesta with endogenous material. In contrast, when faeces are collected from the water column or following settlement, there is the potential to overestimate digestibility because of leaching losses of organic matter.

Early studies by Austreng (1978) examined the changes in diet digestibility when assessments were made from digesta collected from different parts along the gastrointestinal tract (GIT) of rainbow trout using a dissection approach. In that study, substantial increases in digestibility were noted throughout the GIT except between the proximal and distal intestine. Austreng (1978) argued that this supported the use of faecal stripping techniques, where gentle pressure is applied to the abdomen of the fish, approximately over the distal intestine, to expel its faecal contents. It should be noted, however, that it is not always possible to collect faeces by stripping from all species of fish, especially juveniles, and it is impossible for crustaceans.

Cho & Slinger (1979) collected rainbow trout faeces after defaecation using a steep-sided conical tank with the faeces collecting in a small settling chamber. Allan et al. (1999a) adopted a similar approach to collect faeces from silver perch, Bidyanus bidyanus, but used steeply sloped cylindroconical tanks with a terminal collection chamber. Choubert et al. (1982) developed a modification to the settlement approach of Cho & Slinger (1979), which involved the settled faeces being removed from the water onto a moving screen before being deposited onto a collection tray.

A study by Vandenberg & de la Noue (2001) compared the influences of three faecal collection methods (Austreng 1978; Cho & Slinger 1979; Choubert et al. 1982) on the digestibility of a practical diet containing a range of protein sources (e.g. fishmeal, soybean meal, whey, blood meal) when fed to rainbow trout. The findings of that study suggested that there was essentially no difference in diet digestibility assessments between the methods of Cho & Slinger (1979) and Choubert et al. (1982), but that both of these settlement methods resulted in significantly higher diet digestibilities than those determined using the faecal stripping method.

Glencross et al. (2005) compared the digestibilities of a series of ingredients when faeces were collected using either settlement (Cho & Slinger 1979) or stripping techniques (Austreng 1978) (Table 3). Significant differences were observed between the two faecal collection methods as to their effects on ingredient digestibility. Notably, the effect was more pronounced on ingredients high in indigestible carbohydrates. Faecal stripping provides a more conservative estimate of both diet and ingredient digestibilities than that provided using settlement techniques.

Table 3. Ingredient digestibility (%) determined in rainbow trout using different faecal collection methods (data derived from Glencross et al. 2004e)
Fish meal Narrow-leaf lupin LPC LPI Soybean meal SPC EHC Pooled SEM
Stripping
 Organic matter a93.1a a44.6c a70.7b a87.6a a61.0b a67.2b a89.1a 2.95
 Phosphorus a35.1de a346.0a a138.5bc a120.9cd a27.7e a76.3d a92.3cd 14.15
 Energy a99.0a a53.1c a84.2ab a91.3ab a72.1bc a87.3ab a91.5ab 2.94
 Nitrogen/protein a87.5b a85.3b a98.4a a95.1ab a92.1b a97.9a a92.2b 1.21
Settlement
 Organic matter a94.5ab b64.8c a76.7c a94.8a b77.3c b82.0c a98.5a 1.62
 Phosphorus a36.2d b272.2a b87.2c b71.7c a56.7cd a58.9cd a85.4c 10.73
 Energy a96.4ab b70.5e a86.6cd a93.8ab a83.3d a85.6d a98.8a 1.27
 Nitrogen/protein a89.3d b97.2c a101.0b a98.6bc a99.0bc b106.9a a96.0c 0.67
  • Different presuperscripts within columns indicate significant differences between mean values of collection method, but within nutrients and ingredients (P < 0.05). Different postsuperscripts within rows indicate significant differences between mean values of ingredients, but within collection method and nutrients (P < 0.05).

The duration of the faecal collection period is usually dictated by the need to obtain sufficient sample to undertake the required chemical analyses. In general, however, a longer collection period (e.g. >5 days) will minimize variability in faeces composition and improve the reliability of results, although arguably this can also be achieved by collecting faecal samples from a larger number of fish. Reduction in variance has the potential to substantially improve the experimental power and the capacity to detect significant effects (Searcy-Bernal 1995).
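The link between variance and the replication required can be illustrated with a simple two-sample power calculation (a sketch only, using the normal approximation; the z-values correspond to a two-sided α = 0.05 and 80% power, and the figures are illustrative):

```python
import math

def replicates_needed(sd: float, delta: float,
                      z_alpha: float = 1.96, z_beta: float = 0.84) -> int:
    """Approximate number of replicates per treatment needed to detect a
    difference `delta` between two treatment means with common standard
    deviation `sd`, using the two-sample normal approximation:
    n = 2 * ((z_alpha + z_beta) * sd / delta)^2, rounded up."""
    n = 2.0 * ((z_alpha + z_beta) * sd / delta) ** 2
    return math.ceil(n)

# Detecting a 5-unit digestibility difference with SD = 3 vs. SD = 1.5:
n_high_var = replicates_needed(3.0, 5.0)  # 6 replicates per treatment
n_low_var = replicates_needed(1.5, 5.0)   # 2 replicates per treatment
```

Halving the variability sharply reduces the replication required, which is the practical point behind minimizing variance in faecal composition.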

Experiment management issues in ingredient digestibility assessment

As with any experimental strategy, management of the operational conditions is critical. The experimental conditions used can affect digestibility and need to be managed accordingly. Key considerations include environmental conditions, fish size and feeding ration structure.

Environmental conditions, notably water temperature, seem to have only minor effects on digestibility; Windell et al. (1978) noted little influence of water temperature (7, 11 and 15 °C) on the dry matter, protein, lipid, carbohydrate or energy digestibility of a diet fed to rainbow trout of three size classes (19, 207 and 585 g), except for the lowest temperature and the smallest fish. Conversely, substantial differences were noted in the digestibility of starch of varying levels of gelatinization between rainbow trout (Oncorhynchus mykiss) held at either 8 °C or 18 °C (Kaushik 2001), and Kim et al. (1998) reported temperature effects on lipid and energy digestibility for common carp (Cyprinus carpio) at 18 and 25 °C. Ideally, experiments should be conducted at thermal regimes that are optimal for each species, as this will maximize feed intake and facilitate faecal/digesta sample collection.

Feed allocation or ration size has been shown to influence digestibility assessment, but only at the highest feeding rates (Windell et al. 1978). In that study, rainbow trout were fed varying rations of 0.4%, 0.8% and 1.6% of live weight per day, with fish fed the highest feed ration, yielding significantly lower digestibility values for dry matter, carbohydrate and energy, but not for protein or lipid. It could be argued that restricted pair-feeding is the most suitable approach for standardizing digestibility experiments because of this, but practicality and the need for application of the data to commercial conditions mean that feeding to apparent satiety is a more useful strategy.

Substantial variability has been noted in the digestibilities of diets during the period when fish are first fed a new diet (Wybourne & Carter 1999). Because of this, a period of acclimation to diets prior to collection has been suggested and adopted by most researchers (Allan et al. 1999a; Burel et al. 2000). The length of this period varies among researchers, although the data of Wybourne & Carter (1999) suggest that, as variability in digestibility assessments of most diets had declined sufficiently after 4 days, collection can commence from day 5. For an additional degree of conservatism, most researchers use a minimum of 7 days of acclimation (Allan et al. 1999b; Sorenson et al. 2002).

Calculating diet and ingredient digestibilities

The calculation of diet, and subsequently ingredient, digestibilities is largely derived from methods developed in terrestrial animal nutrition science (Maynard & Loosli 1969). Digestibility assessment was first adapted for aquaculture by Cho & Slinger (1979). These researchers developed much of the original methodology for faecal collection, based on the 'Guelph-style' faecal settlement tank system, and adopted the calculation methods used by the terrestrial nutrition sectors. The calculation determines apparent digestibility based on the ratio of marker in the diet and faeces. The apparent digestibility coefficient (ADCdiet) of each specific nutritional variable is calculated using Equation 1:
\[ \mathrm{ADC_{diet}} = 1 - \left( \frac{\mathrm{Marker_{diet}}}{\mathrm{Marker_{faeces}}} \times \frac{\mathrm{Nutrient_{faeces}}}{\mathrm{Nutrient_{diet}}} \right) \qquad (1) \]
In this equation, the terms Markerdiet and Markerfaeces represent the marker content of the diet and faeces, respectively, and Nutrientdiet and Nutrientfaeces represent the nutritional parameter of concern (e.g. protein or energy) in the diet and faeces, respectively. With this formula, values would typically range from 0 to 1. To achieve a percent apparent digestibility, the equation should be multiplied by 100. To calculate the digestibility of the test ingredient, Cho & Slinger (1979) used
\[ \mathrm{Nutr.AD_{ingredient}} = \frac{\mathrm{ADN_{test}} - 0.7 \times \mathrm{ADN_{basal}}}{0.3} \qquad (2) \]
In Equation 2, the Nutr.ADingredient is the digestibility of a given nutrient (or energy) from the test ingredient included in the test diet at 30%. ADNtest is the apparent digestibility of the nutrient of interest in the test diet. ADNbasal is the apparent digestibility of the same nutrient from the basal diet, which makes up 70% of the test diet. A progression of this equation was reported by Sugiura et al. (1998) who used
\[ \mathrm{Nutr.AD_{ingredient}} = \frac{(\mathrm{AD_{test}} \times \mathrm{Nutr_{test}}) - (0.7 \times \mathrm{AD_{basal}} \times \mathrm{Nutr_{basal}})}{0.3 \times \mathrm{Nutr_{ingredient}}} \qquad (3) \]
In Equation 3, the Nutr.ADingredient is the digestibility of a given nutrient from the test ingredient included in the test diet at 30%. ADtest is the apparent digestibility of the test diet. ADbasal is the apparent digestibility of the basal diet, which makes up 70% of the test diet. Nutringredient, Nutrtest and Nutrbasal are the levels of the nutrient of interest in the ingredient, test diet and basal diet, respectively (Sugiura et al. 1998). A similar equation (Equation 4), also based on the nutrient contribution of the test ingredient to the digestibility assessment, was proposed by Forster (1999).
\[ \mathrm{Nutr.AD_{ingredient}} = \mathrm{AD_{test}} + \left( \mathrm{AD_{test}} - \mathrm{AD_{basal}} \right) \times \frac{0.7 \times \mathrm{Nutr_{basal}}}{0.3 \times \mathrm{Nutr_{ingredient}}} \qquad (4) \]

In Equation 4, the terms are as defined for Equation 3 (Forster 1999). Essentially, this equation is the same as that of Sugiura et al. (1998). More recently, prudent amendments have been suggested by Bureau (2006) to ensure that any differences in the dry matter content of the reference diets and test ingredient are accounted for.
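The calculations in Equations 1-4 can be sketched as follows. This is a minimal illustration under the 70 : 30 reference-to-ingredient ratio described in the text; the function names and example values are our own, not taken from the cited papers:

```python
def adc_diet(marker_diet, marker_faeces, nutrient_diet, nutrient_faeces):
    """Apparent digestibility coefficient of a diet (Equation 1)."""
    return 1 - (marker_diet / marker_faeces) * (nutrient_faeces / nutrient_diet)

def adc_ingredient_simple(adc_test, adc_basal, basal_frac=0.7):
    """Cho & Slinger (1979) style calculation (Equation 2): assumes the
    ingredient's share of the nutrient equals its 30% inclusion."""
    return (adc_test - basal_frac * adc_basal) / (1 - basal_frac)

def adc_ingredient_weighted(adc_test, adc_basal, nutr_test, nutr_basal,
                            nutr_ingredient, basal_frac=0.7):
    """Nutrient-weighted calculation in the style of Equations 3 and 4:
    corrects for the actual nutrient contribution of each component."""
    test_frac = 1 - basal_frac
    return ((nutr_test * adc_test - basal_frac * nutr_basal * adc_basal)
            / (test_frac * nutr_ingredient))

# Illustrative values: a 90% protein ingredient in a 50% protein basal diet.
nutr_basal, nutr_ingredient = 500.0, 900.0             # g kg-1 protein
nutr_test = 0.7 * nutr_basal + 0.3 * nutr_ingredient   # 620 g kg-1
simple = adc_ingredient_simple(0.88, 0.90)
weighted = adc_ingredient_weighted(0.88, 0.90, nutr_test,
                                   nutr_basal, nutr_ingredient)
print(round(simple, 3), round(weighted, 3))  # the two estimates differ
```

With these illustrative values the uncorrected (Equation 2) estimate is about 0.83, while the nutrient-weighted estimate is about 0.85, showing why the correction matters when the ingredient and basal diet differ markedly in nutrient content.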

Of the last three equations, the latter two are the more appropriate for determining ingredient digestibilities, because they account for the relative contributions of the test ingredient and the reference diet to the energy or nutrient digestibility being estimated. For example, if an ingredient with 90% protein is added to a reference diet at a ratio of 30% : 70% and the reference diet contains 50% protein, then the added test ingredient actually contributes 270 g kg−1 of protein to the test diet (>40% of total dietary protein), while the reference diet contributes 350 g kg−1. The ingredient's share of dietary protein is thus substantially more than the 30% indicated by the direct proportion of ingredient substituted.
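The arithmetic of this example can be checked directly; the numbers below are those given in the text:

```python
ingr_frac, basal_frac = 0.3, 0.7
ingr_protein, basal_protein = 900, 500   # g kg-1, as in the example above

contrib_ingr = ingr_frac * ingr_protein      # g kg-1 from the ingredient
contrib_basal = basal_frac * basal_protein   # g kg-1 from the reference diet
share = contrib_ingr / (contrib_ingr + contrib_basal)

print(contrib_ingr, contrib_basal, round(share, 3))  # 270.0 350.0 0.435
```

The ingredient supplies about 43.5% of dietary protein despite being only 30% of the diet, which is the distortion that Equations 3 and 4 correct for.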

Assumptions inherent in use of digestibility data

There are several key assumptions in the digestibility assessment process.

The first key assumption is that digestibility coefficients are additive; that is, if the proportional digestibility values for each ingredient in a diet are summed, the total will equal the measured digestibility of the diet. This assumes that there are no interactions among ingredients that differentially affect digestibility, and that changing the inclusion level of a particular ingredient does not change its digestibility. Neither of these assumptions holds true all the time. Several studies using a range of practical ingredients have been used to validate the assumption of additivity [Cho et al. (1982) with rainbow trout, Watanabe et al. (1996) with carp, rainbow trout, tilapia and ayu, Wilson & Poe (1985) with channel catfish and Allan et al. (1999a,b) with silver perch]. However, it has been shown conclusively that the digestibility of some ingredients, particularly those dominated by carbohydrate sources, varies considerably with inclusion level (Stone et al. 2003). It is recommended, therefore, that the digestibility of carbohydrate sources be assessed at several inclusion levels to improve the robustness of the assessment for such ingredients.

A second key assumption is that the marker used to calculate apparent digestibility is inert; that is, it passes through the digestive tract without influencing digestion and at approximately the same rate as the digesta. Chromic oxide, perhaps the most commonly used marker, has been claimed to affect carbohydrate digestibility (Shiau & Liang 1995), but the effects are relatively minor and probably do not reduce the value of digestibility results calculated with that marker (Ng & Wilson 1997); no effect on carbohydrate digestibility was found with gilthead sea bream (Fernandez et al. 1999).

A third assumption is that digestibility coefficients will fall between 0% and 100%. Sometimes this is not the case. Possible causes are analytical errors for markers or nutrients, poor mixing of the marker in the diet or ‘non-representative’ samples of diet or faeces or, as mentioned earlier, interaction between ingredients. Even when all possible sources of error are checked and minimized, digestibility coefficients above 100 or below 0 do occur (Table 4). We recommend that these values be reported but rounded to 0 or 100 when used to formulate diets on digestible energy or nutrient basis.
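The capping described above, when out-of-range coefficients are carried into formulation, can be applied mechanically; this is a sketch, and the helper name is our own:

```python
def cap_adc(adc_percent):
    """Cap an apparent digestibility coefficient (%) to the 0-100 range
    for use in digestible-nutrient formulation; the raw value should
    still be reported."""
    return min(max(adc_percent, 0.0), 100.0)

# e.g. the 346% phosphorus digestibility for narrow-leaf lupin in Table 4
print(cap_adc(346.0), cap_adc(-12.5), cap_adc(85.3))  # 100.0 0.0 85.3
```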

Table 4. Typical representation of ingredient digestibilities and digestible nutrient levels. Note that those digestibility values of over 100% have been rationalized to absolute (100%) digestibility for calculation of digestible nutrient levels in each ingredient. (Data derived from Glencross et al. 2004d)
| Nutrient | Fishmeal | Narrow-leaf lupin | APC | API | Soybean meal | SPC | EHC |
|---|---|---|---|---|---|---|---|
| Digestibility (%) | | | | | | | |
|  Organic matter | 93.1 | 44.6 | 70.7 | 87.6 | 61.0 | 67.2 | 89.1 |
|  Phosphorus | 35.1 | 346.0 | 138.5 | 120.9 | 27.7 | 76.3 | 92.3 |
|  Energy | 99.0 | 53.1 | 84.2 | 91.3 | 72.1 | 87.3 | 91.5 |
|  Nitrogen/Protein | 87.5 | 85.3 | 98.4 | 95.1 | 92.1 | 97.9 | 92.2 |
| Digestible nutrient | | | | | | | |
|  Organic matter (g kg−1) | 799 | 431 | 685 | 850 | 568 | 619 | 828 |
|  Phosphorus (g kg−1) | 8 | 4 | 5 | 5 | 2 | 7 | 8 |
|  Energy (MJ kg−1) | 21.1 | 10.8 | 18.7 | 20.6 | 14.1 | 17.7 | 19.4 |
|  Protein (g kg−1) | 673 | 354 | 679 | 770 | 477 | 578 | 774 |

APC, lupin protein concentrate; API, lupin protein isolate; SPC, soybean protein concentrate; EHC, enzymatically hydrolysed casein.

A fourth assumption is that the faeces collected will be 'representative' of those excreted over time. Allan et al. (1999a,b) showed that digestibility coefficients varied when faeces were collected at different periods after the last meal. To obtain a representative sample, faeces must be collected over a sufficiently long period after feeding ceases. For stripping (or dissection) methods, digesta need either to be obtained from different animals at different periods after feeding ceases, or animals should be fed continuously and digesta sampled at a single time.

Use of a standard reference ingredient in each digestibility experiment provides a useful way to standardize and assist in the assessment of temporal and intralaboratory variance. Ingredients such as vitamin-free casein, enzymatically hydrolysed casein and wheat gluten have been used for this purpose (Morales et al. 1994; Glencross & Hawkins 2004; Glencross et al. 2005) to help standardize digestibility assessments over time.

Ingredient palatability assessment

Formally, palatability is defined as being acceptable to the taste, or sufficiently agreeable in flavour to be eaten. While it may be difficult to ascertain whether or not a fish 'likes' a flavour, it is certainly possible to determine differences in the amounts of feed eaten. It is in this context that we refer to the palatability of a feed and, by inference, an ingredient. This is important because, irrespective of how digestible the nutrients and energy from a particular ingredient are, if the ingredient reduces feed intake it is of limited use in a feed formulation.

While there may be strategies to avert or resolve palatability issues of feed ingredients through ingredient processing or feeding stimulants, it is clearly best if such issues can be avoided. For research to be categorized as nutritional research, it primarily has to be based on the ingestion of nutrients by an organism; therefore, one of the key assessment criteria of good nutritional research should be some demonstration of the level of feed intake by the target organism. Such an assessment then allows measurement of a response relative to that feed intake. However, assessing palatability, particularly for aquatic animals, is not necessarily straightforward. This topic is a large issue in its own right and will only be summarily examined here. For further details, the reviews and texts by Jobling et al. (1995, 2001) and Ruohonen et al. (2001) are suggested.

For an animal to demonstrate a feed intake response, it must be given the opportunity to refuse feed; feeding to beyond apparent satiety is therefore imperative. Regardless of whether an ingredient affects attractiveness or palatability, its effect on feed intake must be assessed independently of effects on the utilization of energy and nutrients. Feed preference studies are one way of assessing effects on intake. The use of self-feeding through computer-managed feedback response mechanisms is another option that has been frequently used to allow discrimination of feeds by fish, and it certainly assists in removing human error from the feed intake assessment process (Juell 1991; Boujard & Le Gouvello 1997; Burel et al. 1997).

One of the key aspects to examine is variability in feed intake over time. Adaptation to diets has been noted in some species (Wybourne & Carter 1999). To enable such examination of variability, assessment of the daily feed intake of individual replicates is advantageous.

Ingredient inclusion trials are probably the simplest way to examine effects on feed intake. An ingredient can be included in a reference diet to create a test diet, and the reference and test diets are then fed to apparent satiety to replicate groups of fish. Significant differences in feed intake between the reference and test diets reflect the apparent palatability effect of the test ingredient. The issue of how much ingredient to include in the test diets is somewhat subjective. Ideally, a range of test ingredient inclusion levels covering the practical inclusion levels should be used, as this also allows examination of critical palatability levels or break points (Shearer 2000). Unfortunately, one of the often-identified limitations of such experimental designs is the limited sensitivity and capacity of the experiments to actually detect significant differences. This highlights the need for stringent experimental design, sufficient experimental power and the appropriate use of control treatments (Ruohonen et al. 2001).

The additional use of positive control treatments, such as those with palatability inhibitors like sulfamerazine sodium (Boujard & Le Gouvello 1997), provides an added degree of confidence to experiments examining intake effects. Notably, many experiments designed to examine serial inclusion effects of a particular ingredient end up with no significant effects. While this can be used to argue that the ingredient is palatable to the test animal up to the inclusion level used, it is often difficult to determine the degree of confidence in such results without positive controls designed to demonstrate a specific effect, such as a decrease in feed palatability.

Nutrient utilization value or interference

Once the variables of digestibility and palatability of an ingredient have been defined, the remaining key issue to resolve is the capacity of the animal to utilize the digested nutrients for growth. There can be many aspects to this issue, depending on the nature of the ingredient. For example, problems associated with metabolic modifiers, such as glucosinolates, will often not be identified through digestibility or palatability studies per se and are better suited to a study that defines the specific nature of the problem by controlling for digestibility- and palatability-related variability. In addition, subtle effects of differences in amino acid composition and availability are often lost in the 'noise' of experimental variance if appropriate controls are not employed.

Measuring growth

The initial weight and size variability of the animals used in a study have an important bearing on its capacity to detect significant effects. Studies should be conducted on animals of an appropriate size for the intended application. For instance, there is limited value in conducting an alternative-ingredient utilization evaluation on larval or juvenile fish with the intention of applying the data to grow-out production (Windell et al. 1978).

With most nutrient utilization studies, the response variable is growth. This is often simply defined as the difference between initial and final live weights; more specifically, it should be defined as live-weight gain. As such, the live-weight gain of the animals from each replicate should be determined, using the same equipment as that used to determine their initial weights, and averaged with data from the other replicates to form a treatment assessment. Live-weight gain is also often reported as percentage gain, usually expressed as the weight gained as a percentage of the initial weight. For measures such as this, it is imperative that replicate-specific initial weights are used in any statistical analysis, such as an analysis of covariance.

Technically, for a measure to be a rate it has to be time specific. The three most routinely used growth rate assessments are daily gain (DG), daily growth coefficient (DGC) and specific growth rate (SGR). DG is perhaps the simplest of the three and is merely the live-weight gain over time, often given in units of g day−1. It is also perhaps the most practical of all growth rate assessments in that it provides information in a more tangible form. DGC, in contrast, is calculated as a percentage based on the cube-root (one-third power) transformation of the final (Wf) and initial (Wi) live weights over time (t) (Equation 5) (Kaushik 1998):
\[ \mathrm{DGC} = \frac{W_f^{1/3} - W_i^{1/3}}{t} \times 100 \qquad (5) \]
The SGR is calculated based on the percentage of the natural logarithm transformation of the final (Wf) and initial (Wi) live weights over time (t) (Equation 6) (Kaushik 1998):
\[ \mathrm{SGR} = \frac{\ln W_f - \ln W_i}{t} \times 100 \qquad (6) \]
A further growth rate parameter gaining use is the thermal growth coefficient (TGC), which is derived from the DGC, but the time component is expanded to be considered on a temperature basis. In this regard, the time component of the TGC is multiplied by the average temperature (°C) over the period of the study (t) (Cho & Bureau 1998) (Equation 7):
\[ \mathrm{TGC} = \frac{W_f^{1/3} - W_i^{1/3}}{\bar{T} \times t} \times 100 \qquad (7) \]

The point of using a growth rate descriptor is essentially to standardize the assessment and allow some comparability of performance across experiments. To achieve this, such a rate should ideally be independent of size. Kaushik (1998) reviewed the applicability of both DGC and SGR for growth assessment of a variety of non-salmonid species and noted that SGR did not provide as good a transformation of growth rates as DGC. An inverse logarithmic relationship was apparent with the SGR transformation, whereas DGC provided a more uniform rate across the entire live-weight range studied. In this regard, if such a growth rate descriptor is required, then DGC is perhaps more appropriate than SGR; however, if the initial weights of the animals are provided, then gain per day is just as, if not more, practical.
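The growth descriptors in Equations 5-7, together with daily gain, can be sketched as follows; the fish weights, duration and temperature are illustrative values of our own:

```python
import math

def daily_gain(wi, wf, t):
    """DG: live-weight gain per day (g day-1)."""
    return (wf - wi) / t

def dgc(wi, wf, t):
    """Daily growth coefficient (Equation 5)."""
    return 100 * (wf ** (1 / 3) - wi ** (1 / 3)) / t

def sgr(wi, wf, t):
    """Specific growth rate (Equation 6)."""
    return 100 * (math.log(wf) - math.log(wi)) / t

def tgc(wi, wf, t, mean_temp):
    """Thermal growth coefficient (Equation 7): the DGC time base is
    scaled by the mean water temperature (deg C) over the study."""
    return 100 * (wf ** (1 / 3) - wi ** (1 / 3)) / (mean_temp * t)

# A fish growing from 50 g to 150 g over 84 days at a mean of 15 deg C:
print(round(daily_gain(50, 150, 84), 2),
      round(dgc(50, 150, 84), 2),
      round(sgr(50, 150, 84), 2),
      round(tgc(50, 150, 84, 15), 3))
```

Running the same functions for small and large fish at a constant DGC illustrates the size dependence of SGR that Kaushik (1998) noted.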

Survival

Stock losses sustained during an experiment are usually presented as percentage survival. This is determined as the number of individuals surviving at the end of a study as a percentage of the number stocked at the beginning. Unless the percentage is divided by the duration of the experiment, survival should not be reported as a rate.

Feed intake and conversion efficiency

For an assessment to be made on the nutrient utilization of a diet, and by reference an ingredient, there is a clear need to measure feed intake. Feed intake by fish is often reported as both an amount (g fish−1) and rate (g fish−1 day−1). However, accurate assessment of feed intake by fish is one of the more difficult aspects of aquaculture nutrition research to achieve.

The efficiency of food use by fish is usually reported as either feed conversion efficiency (FCE: Equation 8) or feed conversion ratio (FCR: Equation 9). These assessments are usually made on a dry-weight-of-feed and live-weight-of-fish basis. Because these variables rely on both the live-weight gain and feed intake assessments, they compound the errors of both.
\[ \mathrm{FCE} = \frac{\text{live-weight gain}}{\text{dry feed intake}} \qquad (8) \]
\[ \mathrm{FCR} = \frac{\text{dry feed intake}}{\text{live-weight gain}} \qquad (9) \]
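These two reciprocal measures can be sketched as follows, with illustrative values of our own:

```python
def fce(live_weight_gain, dry_feed_intake):
    """Feed conversion efficiency (Equation 8): gain per unit feed."""
    return live_weight_gain / dry_feed_intake

def fcr(dry_feed_intake, live_weight_gain):
    """Feed conversion ratio (Equation 9): feed per unit gain."""
    return dry_feed_intake / live_weight_gain

# 130 g of dry feed producing 100 g of live-weight gain:
print(round(fce(100, 130), 2), round(fcr(130, 100), 2))  # 0.77 1.3
```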

Nutrient retention

The efficiency by which nutrients and energy are retained from feeds provides a useful assessment of the efficiency of nutrient utilization from diets (Cho & Kaushik 1990; Booth & Allan 2003; Glencross et al. 2004b). To determine this, assessment of the nutrient and energy composition of both the feed and the fish is required, on an as-fed and a live-weight basis, respectively. However, such data can be strongly influenced by animal size, with smaller animals typically being far more efficient at retaining both nutrients and energy than larger fish of the same species (Lupatsch et al. 2002).

Apparent biological value (ABV) is another parameter often quoted and is largely a derivation of nutrient and energy retention values (Morales et al. 1994). Typically, ABV is the retention percentage but on a digestible nutrient basis. In this regard, it provides some assessment of the proportion of the nutrients or energy absorbed from the diet that is actually used for somatic growth. Clearly, for this parameter to be estimated, an assessment of the diet nutrient and energy digestibilities is required.
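Retention and ABV, as described above, can be sketched as follows; the helper names and example values are illustrative assumptions of our own:

```python
def retention(nutrient_gain, nutrient_intake):
    """Nutrient (or energy) retention as a % of gross intake."""
    return 100 * nutrient_gain / nutrient_intake

def abv(nutrient_gain, nutrient_intake, adc):
    """Apparent biological value: retention on a digestible basis,
    where adc is the diet's apparent digestibility coefficient (0-1)."""
    return 100 * nutrient_gain / (nutrient_intake * adc)

# e.g. 18 g protein deposited from 50 g ingested, at 85% protein digestibility:
print(round(retention(18, 50), 1), round(abv(18, 50, 0.85), 1))  # 36.0 42.4
```

The difference between the two values reflects the share of the shortfall attributable to incomplete digestion rather than to post-absorptive losses.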

Additional response variables

The range of response variables that can be examined is quite substantial and increasing. Clearly, there are some variables that are critical for an objective assessment of the performance of fish fed specific diets. However, there are additional variables that can provide useful additional information and are worth considering in the appropriate circumstances, depending on the specific objectives of the study.

Biochemical variables, such as changes in blood glucose or thyroid hormone levels (triiodothyronine and thyroxine), or enzyme activities, have also been used to provide an indication of disturbance by antinutritional factors (ANF) to metabolic function and nutrient utilization by fish (Burel et al. 2001); homeostatic hormones such as thyroxine and triiodothyronine have the potential to be particularly sensitive indicators of such disruption. Other hormone assays used more recently include insulin-like growth factor 1 (Dyer et al. 2004). More recently, the examination of protein expression using two-dimensional electrophoresis has allowed assessment of the influence of specific diet types on hepatic metabolism (Vilhelmsson et al. 2004). Studies such as these are significantly advancing the understanding of specific ingredient effects on functional metabolism.

Effects of dietary treatments on whole somatic or organ-specific composition are another means of assessment of potential ingredient effects (Ruyter et al. 2000; Booth & Allan 2003). Notably, whole somatic composition analysis is required for the examination of nutrient/energy utilization efficiency and/or ABV assessments.

Organoleptic properties, while not a common assessment, have been used to evaluate the potential impact of novel ingredients on product quality aspects. Such assessments have been more prevalent in studies on fish oil replacement than for fishmeal replacement, but some reports on influences of products such as rendered meat meals do exist (Thomassen & Rosjo 1989; Williams et al. 2003a,b).

An evaluation of immune responses and parameters associated with an immune challenge have also been effectively used in recent times (Hardy 1999; Krogdhal et al. 2000; Montero et al. 2003).

Strategies for evaluating the inclusion of ingredients

Limited-inclusion trials are perhaps the most typical ingredient evaluation trials used (Hughes 1988; Refstie et al. 1998; Carter & Hauler 2000; Table 5). In these studies, the test ingredient is substituted at either one or two levels only, usually for fishmeal, on a basis to maintain equivalent protein and energy levels. Advancements in this research strategy involve the substitutions being made on an equivalent digestible level. Such trials can be useful where experimental treatment options are limited, and the test ingredient comprises a substantial amount of the diet (e.g. 30%). While such studies provide a limited assessment of the utilization of the ingredient, the use of only a single inclusion level provides a relatively risky assessment with little capacity for extrapolation of effects.

Table 5. Experimental trial types used to study nutrient utilization and ingredient use limitations
Trial type Diet design specifics Ingredient inclusion Feeding strategy Examples
Limited inclusion All diets formulated to same relevant practical protein and energy levels (usually on a digestible basis) for use with respective aquaculture species One or two levels of inclusion Apparent satiety or fixed ration Hughes (1988), Refstie et al. (1998), Carter & Hauler (2000)
Serial inclusion All diets formulated to relevant practical protein and energy levels (usually on a digestible basis) for use with respective aquaculture species Included in a series of test diets at varying inclusion levels Apparent satiety or fixed ration Robaina et al. (1995), Burel et al. (1998), Farhangi & Carter (2001)
Summit dilution Reference diet formulated to relevant practical protein and energy levels for use with respective aquaculture species, but protein and energy specifications not maintained with progressive inclusion of test ingredients Included in a series of test diets at varying inclusion levels Apparent satiety or fixed ration Williams et al. (2003a,b), Booth & Allan (2003)
Limiting constraint All diets formulated to equivalent digestible protein and energy levels, but with protein provided at a reduced proportion of estimated requirements Included in a series of test diets at varying inclusion levels Restrictively pair-fed Glencross et al. (2003d, 2004b)

An advancement on the limited-inclusion trial is the serial-inclusion trial, where multiple (three or more) levels of the ingredient are evaluated (Burel et al. 1998; Farhangi & Carter 2001; Glencross et al. 2004c; Table 5). Typically, such trials have used a series of inclusion levels such as 0%, 10%, 20%, 30%, 40% and 50% of the test ingredient in a reference diet. In these studies, two key diet mixes, usually the reference (0%) and the highest inclusion level, are made in sufficiently large quantities to allow the creation of the other treatment diets by blending at the appropriate ratios. This strategy reduces diet preparation risk. This style of utilization trial provides substantially more information than single-inclusion-level trials. However, the practicality of the upper level of test ingredient inclusion has varied substantially, with some attempts made at single-ingredient inclusion levels as high as 70% of the diet. Inclusion levels of new ingredients in commercial diets are unlikely ever to be greater than 25% of the diet; increasing the number of inclusion levels over a practical range, rather than going to extreme inclusion levels, therefore adds greater value to a study. We therefore propose that upper inclusion levels closer to 40-50% of the diet will better focus on ingredient inclusion levels more likely to be used commercially.
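The two-master-mix blending described above can be computed simply; this is a sketch with illustrative inclusion levels:

```python
def blend_fractions(target_inclusion, top_inclusion):
    """Fractions of the top-inclusion mix and the reference (0%) mix
    needed to produce a diet at the target inclusion level."""
    top_frac = target_inclusion / top_inclusion
    return top_frac, 1 - top_frac

# Diets at 10-40% inclusion blended from a 0% reference and a 50% top mix:
for target in (10, 20, 30, 40):
    top, ref = blend_fractions(target, 50)
    print(target, round(top, 2), round(ref, 2))
```

For example, a 20% inclusion diet is obtained from 0.4 parts of the 50% top mix and 0.6 parts of the reference mix.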

A similar strategy was used by Higgs et al. (1983) to evaluate the nutritional value of different types of canola meals. However, they also adopted a series of isoenergetic diets of various protein levels, with one diet specifically designed to be protein limited as a means of exacerbating nutrient utilization effects. In that study the diets were fed to apparent satiety, and some of the capacity of the study to detect limitations to nutrient utilization was lost through variability in feed intake of the protein-limited diets; significant effects of canola meal inclusion were noted in the higher-protein diets but not in the protein-limited diets.

To avert the potentially confounding issue of unregulated feed intake of nutrient-limited diets, the use of limiting-constraint trials was advocated by Glencross et al. (2003d; Table 5) as a means of distinguishing the difference in nutritional value between two closely related varieties of lupin kernel meal. Earlier experiments using commercially analogous diets and apparent satiety feeding had demonstrated no significant difference between the two lupin varieties. Subsequent re-evaluation, based on diets formulated to equivalent digestible energy levels and reduced protein levels, with a pair-feeding regime used to minimize differences in dietary intake, allowed clear and significant differentiation in nutritional value between the two varieties.

Summit-dilution trials (SDT) are another form of ingredient nutrient utilization assessment, which use a serial dilution principle, with increasing amounts of a reference diet being replaced by a test ingredient (Booth & Allan 2003; Table 5). In these trials, the ingredient is substituted for the whole diet, similar in principle to the DRM used for digestibility diets, without balancing of nutrients or energy. An important additional component of the SDT is the use of a series of controls in which the ingredient inclusion levels are matched with treatments incorporating a similar amount of a filler of no nutritional value (e.g. cellulose or diatomaceous earth). These controls provide a reference for each respective test ingredient treatment by demonstrating the relative nutrient utilization achieved from that incorporation of the ingredient. Feeding in such an experiment is performed on a restricted basis, so that animals cannot simply increase intake to compensate for reductions in energy or nutrients (Booth & Allan 2003). The limitation of the SDT is that, without balancing the diets to equivalent digestible nutrient and energy levels, interpretation of the response effects may be confounded by nutrient and energy intake limitations.

Factors likely to affect nutrient and energy utilization of ingredients

Antinutritional factors have the capacity to have significant deleterious effects on nutrient utilization by fish. Notably, substantial effects of a range of ANF have already been noted on fish using a wide range of experimental techniques (Krogdhal et al. 1994; Bureau et al. 1998; Refstie et al. 1998; Burel et al. 2001; Francis et al. 2001; Glencross et al. 2003a, 2006). In defining the effects of ANF on fish, a variety of experimental strategies have been adopted.

Limiting-constraint trials are another way of defining the potential impacts of ANF on nutrient utilization by fish. By limiting the capacity of the fish to self-regulate protein and energy intake, any influences of the ANF on nutrient utilization are manifested as growth effect differences.

Assessment of tissue histology is an additional parameter that has been useful in examining some of the longer-term and chronic effects of ingredient and ANF inclusion in fish diets (Robaina et al. 1995; Krogdhal et al. 2000; Storebakken et al. 2000). Specific conditions, such as enteritis of the distal intestine associated with the use of soybean meals, are a case in point (Krogdhal et al. 2000, 2003).

Ingredient functionality

Ingredient functionality is another facet of ingredient evaluation that is increasing in importance. Functionality of feed ingredients relates to their effects on the physical properties of the processed feed (Thomas & van der Poel 2001). For example, during extrusion processing, starch increases the porosity of the pellet through expansion of the starch structure during the starch gelatinization and expansion processes. However, there are numerous other physical attributes of aquaculture feeds, which can be influenced by ingredient choice. Similarly, there are also several methods that can be used to assess these attributes.

Clearly, the attributes sought are those where pellets produced from the formulation result in a product with properties that provide advantages for feeding aquatic species. These properties include aspects such as sink rates, pellet durability, degree of starch gelatinization and oil absorption capacity.

Experimental extrusion processing is clearly one of the better ways to evaluate ingredient functionality, as the results will have direct implications for a final product. In these studies, a hypothetical formulation including a test ingredient is run through an extruder and the properties of the pellets produced are compared with either a reference formulation or a series of target specifications.

Analysis of ingredient or formulation viscosity is a relatively new technique that has been used as a basis for examining the potential functionality of ingredients and their likely attributes under feed extrusion conditions (International Association for Cereal Science and Technology (ICC) 1995). This assessment has largely made use of rapid viscosity analysis (RVA™, Newport Scientific, Sydney, Australia) equipment and is based on an examination of the gelling/pasting characteristics of an ingredient/formulation at varying temperatures, the peak viscosity and the relative deterioration of that viscosity with time and temperature. From an ingredient evaluation perspective, the technique lends itself to relatively rapid throughput, provided some relationship is established between the RVA observations and those achieved under extrusion conditions.

Emerging technologies and issues for ingredient evaluation

Improving the basis for assessing ingredient nutrient and energy digestibility and availability is perhaps one of the key issues in maximizing opportunities for optimal use of feed ingredients by the aquaculture feed sector. While there have been substantial advances in this area in recent years, there is a clear need to improve our understanding of the interactions among ingredients, nutrients and processing, and how these influence digestibility. As the adoption of alternatives to fishmeal increases, interactions among feed ingredients will probably become increasingly complex. The nature of such ingredient interactions may also have important implications for the study of ingredient functionality.
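As an illustration of the calculations underlying such digestibility work, the sketch below implements the widely used inert-marker ratio formula for the apparent digestibility coefficient (ADC) of a nutrient, together with one common form of the substitution calculation for isolating the ADC of a test ingredient. The 70:30 reference:test-ingredient design and all function names and values are illustrative assumptions, not prescriptions:

```python
def adc(marker_diet, marker_faeces, nutrient_diet, nutrient_faeces):
    """Apparent digestibility coefficient of a nutrient, using an inert
    marker (e.g. chromic oxide) measured in diet and faeces.
    All arguments are concentrations (e.g. g/kg dry matter)."""
    return 1.0 - (marker_diet / marker_faeces) * (nutrient_faeces / nutrient_diet)


def adc_ingredient(adc_test, adc_ref, nutrient_ref, nutrient_ing, ref_fraction=0.7):
    """ADC of the nutrient contributed by the test ingredient in a
    reference:test substitution design (here 70:30 by default).
    nutrient_ref and nutrient_ing are the nutrient contents of the
    reference diet and the test ingredient, respectively."""
    ing_fraction = 1.0 - ref_fraction
    # Nutrient content of the blended test diet.
    nutrient_test = ref_fraction * nutrient_ref + ing_fraction * nutrient_ing
    return (adc_test * nutrient_test - ref_fraction * adc_ref * nutrient_ref) / (
        ing_fraction * nutrient_ing)
```

For example, with 10 g/kg marker and 450 g/kg protein in the diet, and 40 g/kg marker and 300 g/kg protein in the faeces, the protein ADC is 1 − (10/40)(300/450) ≈ 0.83. Note that, as discussed in the abstract, diet design and method of calculation both materially affect the values obtained.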

While several in vitro assays have been developed for ingredient digestibility evaluation, no assay validated against in vivo digestibility values has yet gained widespread acceptance (Carter et al. 1999). Further progress in this area could provide a much-valued test for the assessment of ingredient quality.

The use of rapid analysis techniques for ingredient composition, such as near-infrared spectroscopy (NIRS), has considerable potential to improve the basis for diet formulation from variable batches of raw ingredients. Although NIRS is in almost routine use in many feed companies for the evaluation of the crude protein, moisture and fat composition of ingredients and products, the technology is now also beginning to be used for the assessment of digestible nutrients and energy from ingredients (Aufrere et al. 1996). Presently, the widespread adoption of NIRS for such digestible nutrient and energy assessments is limited by the availability of robust digestibility data, on enough samples of specific ingredients, to develop competent calibration curves.
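In essence, developing such a calibration curve means regressing reference (in vivo) digestibility values on spectral measurements and validating the model on held-out samples. The sketch below uses entirely synthetic data, and ordinary least squares as a simplified stand-in for the partial least squares regression normally used in NIRS work (PLS handles the highly collinear wavelength channels of real spectra better); all names and values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "spectra": n samples x p wavelength absorbances, with
# reference digestible-nutrient values measured in vivo (illustrative).
n, p = 40, 10
spectra = rng.normal(size=(n, p))
true_coef = rng.normal(size=p)
digestibility = spectra @ true_coef + rng.normal(scale=0.05, size=n)

# Split into calibration and validation sets.
cal, val = slice(0, 30), slice(30, 40)

# Fit the calibration model on the calibration samples only.
coef, *_ = np.linalg.lstsq(spectra[cal], digestibility[cal], rcond=None)
pred = spectra[val] @ coef

# Root-mean-square error of prediction on the held-out samples: the
# usual figure of merit for judging whether a calibration is "competent".
rmsep = float(np.sqrt(np.mean((pred - digestibility[val]) ** 2)))
```

The limiting factor identified above appears here as the size and representativeness of the calibration set: too few in vivo reference samples, and the prediction error on new batches becomes unacceptably large.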

The use of exogenous enzymes has been explored in aquaculture nutrition for some time, but has been largely limited to the evaluation of phytase preparations applied to some plant protein meals to improve phosphorus utilization, along with other minor effects (Mwachireya et al. 1999; Forster et al. 2000). However, substantial scope exists for the use of other enzymes, many of which are already used by the terrestrial animal feed sectors (Cowan et al. 1996; Castañón et al. 1997). The potential of some of these enzymes, such as amylase, α-galactosidase, β-glucanase and xylanase, has been explored (Glencross et al. 2003a; Stone et al. 2003). While these enzymes certainly improve nutrient (mostly carbohydrate) absorption across the gut, the effective utilization of these absorbed nutrients is yet to be fully studied. Other enzymes, such as pectinases, hemicellulases and cellulases, and their potential modes of application are yet to be examined. To realize this scope, strategies significantly different from those used with terrestrial animal feeds will be required to allow application of the enzymes to extrusion-produced fish feeds. Opportunities such as thermostable enzyme preparations and preprocessing of ingredients have been recognized (Bedford 2000).

There is some information on the modes of action of biologically active compounds on fish (both beneficial and deleterious). Some of the more notable biologically active compounds, or ANF, such as glucosinolates, oligosaccharides, saponins, gossypol and trypsin inhibitors, have been studied to an extent (reviewed in Francis et al. 2001). However, most studies examine a specific ANF and focus on whether it exerts an effect on the animal, rather than on the specific mode of action by which the compound affects the animal.

There is also limited information on the efficiencies of amino acid and energy utilization and how these differ between ingredients. While determining the digestible value of ingredients is the first step in improving our understanding of the nutritional utilization of feeds and ingredients, this could be taken further by resolving utilization at the level of specific nutrients and energy from feeds and ingredients. This would serve not only as a basis for improving our understanding of nutritive values, but also for placing an economic value on specific ingredients for inclusion in aquaculture feeds.

Conclusions

As aquaculture continues to develop, there will be an increasing need to use alternative raw materials in aquaculture diets. This fact alone places great importance on careful and constructive experimental design in evaluating these raw materials. There is a clear and present need to ensure that ingredient evaluation experiments are designed to answer specific questions. By formulating succinct strategies based on the key aspects of ingredient choice and use (digestibility, palatability, utilization and functionality), the development of sustainable alternatives will be ensured.

Acknowledgements

We acknowledge the editorial support of Greg Maguire in preparation of this manuscript.
