Use of a standardized protocol for searching and retrieving metadata would save resources currently spent developing multiple client and server applications that are functionally very similar, and would make life easier for users, who could learn a single search client and then use it to search multiple catalogs.
Catalog client -- Many applications for searching a metadata catalog already exist (see Links to Operational Catalogs). Most (perhaps all) of these applications are 'tightly coupled': the web page searches only one metadata registry database, typically using a protocol specific to that particular application. In an interoperable catalog scenario, the metadata databases would all implement a standard catalog service (alongside their existing custom interfaces), so a single client application could connect to any of these databases to search the metadata. A single, open source client framework could be reused by anyone wishing to implement a structured metadata search capability; such frameworks are under development, see exCat, CatalogConnector, GeoNetwork. Wide use of a common catalog search framework would give users a familiar interface and reduce training requirements (a minimal client sketch follows below).
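To make the loose coupling concrete, the sketch below shows that a client needs nothing beyond the standard protocol itself. It is a minimal sketch, assuming the target catalog implements OGC CSW 2.0.2 with key-value-pair GetRecords and CQL text constraints; the endpoint URL and search term are placeholders, not references to a real service.

    # Minimal sketch of a loosely coupled catalog client, assuming the target
    # implements OGC CSW 2.0.2 with KVP GetRecords and CQL_TEXT constraints.
    import urllib.parse
    import urllib.request
    import xml.etree.ElementTree as ET

    CSW_NS = "http://www.opengis.net/cat/csw/2.0.2"
    DC_NS = "http://purl.org/dc/elements/1.1/"

    def search_catalog(endpoint, text, max_records=10):
        """Send a GetRecords request to a CSW endpoint; return (title, id) pairs."""
        params = {
            "service": "CSW",
            "version": "2.0.2",
            "request": "GetRecords",
            "typeNames": "csw:Record",
            "resultType": "results",
            "elementSetName": "brief",
            "constraintLanguage": "CQL_TEXT",
            "constraint_language_version": "1.1.0",
            "constraint": f"csw:AnyText LIKE '%{text}%'",
            "maxRecords": str(max_records),
        }
        url = endpoint + "?" + urllib.parse.urlencode(params)
        with urllib.request.urlopen(url) as response:
            tree = ET.parse(response)
        records = []
        for rec in tree.iter(f"{{{CSW_NS}}}BriefRecord"):
            title = rec.findtext(f"{{{DC_NS}}}title", default="(untitled)")
            ident = rec.findtext(f"{{{DC_NS}}}identifier", default="")
            records.append((title, ident))
        return records

    # The same function works against any conforming catalog; only the URL changes:
    # search_catalog("https://example.org/csw", "sea surface temperature")

The key point is that nothing in the function is specific to one registry: switching catalogs means changing a URL, not writing a new client.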
Catalog server -- Tightly coupled client-server search applications require an agency wishing to expose metadata for its resources to implement a search application specific to its own metadata registry database. In an interoperable catalog scenario, in which existing clients can search any metadata registry database that is 'published' through a standard catalog service, the agency need only implement the catalog service and make the interface public to enable search by existing clients.
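To illustrate how little the publishing side must add, here is a deliberately toy sketch of a catalog service front end sitting on an agency's existing metadata store (represented by a hypothetical in-memory list). It answers only a simplified GetRecords request and treats the constraint as a plain search term rather than parsing CQL; a production deployment would use a full CSW server such as GeoNetwork. The point is only that this one standard interface is all that must be exposed.

    # Toy sketch of the server side: one standard search interface in front of
    # an agency's existing metadata store (here a hypothetical in-memory list).
    from http.server import BaseHTTPRequestHandler, HTTPServer
    from urllib.parse import urlparse, parse_qs

    METADATA_STORE = [  # stand-in for the agency's metadata registry database
        {"id": "rec-001", "title": "Gulf of Maine bathymetry"},
        {"id": "rec-002", "title": "Sea surface temperature, 1990-2000"},
    ]

    class CatalogHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            query = parse_qs(urlparse(self.path).query)
            if query.get("request", [""])[0] != "GetRecords":
                self.send_error(400, "Only GetRecords is sketched here")
                return
            # Simplification: the constraint is matched as a bare term,
            # not parsed as CQL as a conforming server would do.
            term = query.get("constraint", [""])[0].lower()
            hits = [r for r in METADATA_STORE if term in r["title"].lower()]
            body = "".join(
                f"<csw:BriefRecord><dc:identifier>{r['id']}</dc:identifier>"
                f"<dc:title>{r['title']}</dc:title></csw:BriefRecord>"
                for r in hits
            )
            xml = (
                '<?xml version="1.0"?>'
                '<csw:GetRecordsResponse'
                ' xmlns:csw="http://www.opengis.net/cat/csw/2.0.2"'
                ' xmlns:dc="http://purl.org/dc/elements/1.1/">'
                f'<csw:SearchResults numberOfRecordsMatched="{len(hits)}">'
                f"{body}</csw:SearchResults></csw:GetRecordsResponse>"
            )
            self.send_response(200)
            self.send_header("Content-Type", "application/xml")
            self.end_headers()
            self.wfile.write(xml.encode())

    if __name__ == "__main__":
        HTTPServer(("", 8000), CatalogHandler).serve_forever()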
Metadata propagation -- Currently a user may have to search multiple metadata registries through different user interfaces to be confident that they have looked everywhere. In a system of interoperable catalogs, metadata records from any registry could be harvested and cached by any other catalog in the system, so a resource registered in one metadata registry would eventually propagate through the whole system. Alternatively, a single search client could be programmed to query many catalogs and aggregate the results. Either approach would help ensure that searches from a single search interface reliably discover a high proportion (ideally all!) of the relevant resources of interest.
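The federated-search variant can reuse the client function sketched earlier: query each known endpoint, then merge the results, de-duplicating by record identifier so that records which have already propagated into several catalogs appear only once. A sketch, assuming the hypothetical search_catalog() function from the client example and placeholder endpoint URLs:

    # Sketch of client-side aggregation across several catalogs, reusing the
    # hypothetical search_catalog() from the client example above.
    ENDPOINTS = [  # placeholder URLs for known catalog services
        "https://catalog-a.example.org/csw",
        "https://catalog-b.example.org/csw",
    ]

    def federated_search(text):
        """Query every known endpoint and merge results, de-duplicating by
        identifier so harvested copies of a record appear only once."""
        merged = {}
        for endpoint in ENDPOINTS:
            try:
                for title, ident in search_catalog(endpoint, text):
                    merged.setdefault(ident, title)  # first copy of a record wins
            except OSError:
                continue  # one unreachable catalog should not sink the whole search
        return merged

Harvesting works the same way from the catalog's side: the same GetRecords interface that serves users also lets one catalog periodically pull and cache another's records.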
Structured metadata -- Most internet users are now accustomed to single text-box, free-text searches (e.g. Google, Yahoo) that return hundreds of results ranked by 'relevance'. These search engines use algorithms to determine relevance based on word associations in text, HTTP linkages between resources, and statistics on user navigation between resources. Such searches work remarkably well for many everyday situations. The downsides are the large number of irrelevant hits, the lack of information to assess the utility of the located resources for specific technical applications, the absence of the information required to access a resource automatically except through HTTP URLs, and the inability to index resources that are not text-based and web-accessible. Structured metadata enables scenarios in which very focused results can be obtained, the information returned by a search provides good guidance on fitness for purpose, and the results include sufficient information to allow automated linkage to non-HTTP services.
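The contrast with free-text search is easiest to see in the query itself. With structured metadata, a constraint can target typed fields (subject keyword, modification date, geographic extent) rather than matching words anywhere in a page. A sketch, assuming a CSW endpoint that accepts CQL text constraints over Dublin Core queryables (property names and the bounding-box values are illustrative and vary by server):

    # Sketch of a structured query, assuming the catalog accepts CQL_TEXT
    # constraints over Dublin Core queryables (property names vary by server).
    structured_constraint = (
        "dc:subject = 'sea surface temperature' "       # typed keyword field
        "AND dct:modified >= '2005-01-01' "             # records updated since a date
        "AND BBOX(ows:BoundingBox, -72.0, 40.0, -65.0, 45.0)"  # geographic extent
    )

Passed as the 'constraint' parameter of a GetRecords request (see the client sketch above), such a query returns only records whose keyword, date, and footprint all match, giving the focused result set that free-text relevance ranking cannot provide.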