INFO: OLE DB Readme Included with Data Access SDK Version 2.0

ID: Q191745

The information in this article applies to:


This article contains the OLE DB Readme file which is placed into the MSDASDK\DOC folder after installing the Microsoft Data Access SDK version 2.0.



(c) 1998 Microsoft Corporation. All rights reserved.

This document provides late-breaking or other information that supplements the Microsoft Data Access SDK documentation.




For a complete overview of OLE DB, see the OLE DB Overview in the Data Access SDK.


A number of new features, enhancements, tools, and components are included in version 2.0 of OLE DB to help developers leverage the power of OLE DB. Some of the major new features include:

   *  Additions to the OLE DB specification
   *  New components
   *  Updated components

These new features are described in more detail below.

2.1 Additions to the OLE DB Specification

The OLE DB 2.0 specification is a fully backward-compatible upgrade to the OLE DB 1.x specification. Major areas of enhancement include:

   *  The ability to express hierarchical data through rowsets.
   *  The ability to represent an index and data in the same rowset.
   *  The ability to create tables with constraints and to add or drop
      constraints through common methods.
   *  The addition of new data types for variable-length numeric,
      filetime, and propvariant data.
   *  The ability to persist a command as a stored procedure or view.
   *  New interfaces for persisting or loading connection information
      through strings.
   *  Numerous clarifications and fixes to the documentation.

2.2 New OLE DB Core Components

There are several new components provided as part of the core Microsoft Data Access Components:

   *  Data Links -- The Data Links component provides a common user
      interface for creating and managing connections to OLE DB data
      sources. To launch data links from an application, you can:

         * Call IDBPromptInitialize::PromptDataSource, or
         * Double-click Data Links in the Control Panel, or
         * Double-click a data link (.udl) file, or right-click it and
           select Properties.

   * Service Components -- OLE DB service components provide guaranteed
     functionality to OLE DB consumers above the minimum level required by
     OLE DB providers. The OLE DB 2.0 service components support scrolling,
     Find, and bookmarks against any minimum-level OLE DB provider;
     updating against SQL providers; resource pooling for frequent
     connect/disconnect scenarios; and automatic transaction enlistment in
     a Microsoft Transaction Server environment.

   * Data Shape Provider -- The OLE DB Data Shape Provider enables ADO or
     OLE DB consumers to execute hierarchical queries and navigate through
     hierarchical recordsets or rowsets.

   * Native SQL Server Provider -- The SQL Server native OLE DB provider
     provides direct access to SQL Server, without going through ODBC.

   * Native Oracle Provider -- The Oracle native OLE DB provider provides
     direct access to Oracle data, without going through ODBC. This native
     Oracle provider includes support for data types such as negative-scale
     numeric data types that cannot be expressed through ODBC.

   * Native OLE DB Provider for Microsoft Access files -- The OLE DB
     Provider for Microsoft Access files provides direct, efficient access
     to Microsoft Access .mdb files.

   * New Technical Papers -- For more information on OLE DB, see the
     technical papers included in:

        DA SDK Documentation \ Microsoft Data Access Technical Articles \
        OLE DB

2.3 Updated OLE DB Core Components

The following OLE DB core components have been significantly enhanced for the 2.0 release:

* Updated Header (Oledb.h) and Lib (Oledb.lib) Files -- The MSDASDK
  contains the most recent Oledb.h and Oledb.lib files. The Oledb.h file
  must be located first in the INCLUDE path, before any older versions. In
  the integrated development environment for Visual C++ 6.0, this is
  accomplished by going to:

      Tools\Options\Directories\Include Files

  and putting the Oledb.h directory from the MSDASDK before the one
  automatically included by Visual C++ 6.0.

* OLE DB Provider for ODBC -- The ODBC provider delivers rich, high-
  performance access to any ODBC database. The 2.0 version of this provider
  contains better handling of long data as well as performance and scaling
  enhancements. For more information, refer to the ODBC Provider Help in
  the Microsoft Data Access SDK under \OLE DB\OLE DB Providers\OLE DB ODBC.

* Data Conversion Library -- The Data Conversion Library is a common
  conversion library that makes it easy for OLE DB provider writers to
  expose a rich set of data conversions to OLE DB consumers. The latest
  data conversion library provides support for the new data types and
  conversion options defined in the OLE DB 2.0 specification. For more
  information about using this library, see the Help in the Microsoft Data
  Access SDK under \OLE DB\OLE DB Core Components. Important Note: The
  updated version of the data conversion library requires the latest
  version of Oleaut32.dll. The updated version of Oleaut32.dll is available
  as part of Windows NT 4.0, or through the latest Windows 95 Service Pack.

2.4 Updated OLE DB Test Components

The following OLE DB test components have been significantly enhanced for the 2.0 release:

* Test Suites -- The test suites shipped as source code in the Data Access
  SDK 2.0 have been greatly improved. The new tests are more complete and
  at the same time more generic, with a number of provider assumptions
  removed. The tests that correspond to the minimal provider interfaces,
  along with a new set of basic ADO compatibility tests, are now supported
  as a proposed set of conformance tests.

* RowsetViewer -- The RowsetViewer is a sample application that was
  previously shipped with OLE DB 1.5. The 2.0 release of this component is
  greatly improved, and provides an easy way to call almost any method
  within your provider in an ad hoc fashion. Although the sample code is
  still provided so that you can see how this program works, the program
  itself now provides sufficient functionality for testing and debugging.

* LTM -- The Local Test Manager is a tool for running the tests shipped in
  the DA SDK 2.0, or custom tests. LTM has been significantly improved and
  simplified for the 2.0 release. There is now a single executable file,
  which makes it easier to install, set up, and debug tests.

* ITest Ad Hoc Tool -- The version 2.0 release of the Test Tools has been
  enhanced to provide better support for creation and management of views
  through the ad hoc tool, as well as for representing OLE DB for OLAP
  datasets.


OLE DB version 2.0 includes a proposed OLE DB leveling specification and conformance tests. They are highlighted in this version so developers can provide feedback prior to their final release. Each of these technologies is described in more detail below.

(For more information on conformance testing in general, see section 3 in the Data Access SDK release notes, DASDKreadme.txt.)

3.1 OLE DB Leveling Specification

OLE DB leveling improves interoperability between OLE DB data stores and applications by defining sets of interfaces and functionality. The leveling document describes two levels of functionality -- one for data stores and another for applications. The difference in functionality between the minimum data store functionality and the base consumer functionality is made up by components, called "service components," that will provide a common implementation of extended functionality where required by the application and not implemented by the data store.

* Data stores implementing at least the minimum provider functionality can
  be assured that they will work well with a wide variety of tools and
  applications. Data stores are encouraged to implement additional
  interfaces that expose native functionality above this minimum provider
  level, but must meet at least this level of functionality to be
  considered an interoperable OLE DB data store.

* Applications that consume only the base consumer interfaces can be
  assured that they will work well with any OLE DB-compliant data store.
  Applications can consume additional interfaces outside the base consumer
  level, but should have conditional code to handle providers that don't
  support that extended functionality if they want to work with any OLE DB-
  compliant data store.

NOTE: This leveling document is being presented as a proposed set of requirements for any OLE DB provider. OLE DB data stores should implement at least the functionality specified by the minimum provider level in order to interoperate with the greatest number of consumers and to prepare for future conformance programs.

3.2 OLE DB Conformance Tests

In conjunction with the leveling of the OLE DB specification described above, Microsoft will provide a set of conformance tests that verify that OLE DB data stores correctly implement at least the minimum provider functionality. The proposed conformance tests are shipped in the /Conformance/Tests/Proposed directory of the MSDASDK, and include the following files that test the indicated interfaces:

   Test Name            Interface

   IACCESSR             IAccessor
   ICOLINFO             IColumnsInfo
   ICNVTTYP             ICanConvertType
   IDBCRSES             IDBCreateSession
   IDBINIT              IDBInitialize
   IDBPRPTS             IDBProperties
   IGETDSO              IGetDataSource
   IOPENRW              IOpenRowset
   IPERSIST             IPersist
   IROWSET              IRowset
   IROWCHNG             IRowsetChange::SetData
   IROWDEL              IRowsetChange::Delete
   IROWIDEN             IRowsetIdentity
   IROWINFO             IRowsetInfo
   IROWNEW              IRowsetChange::InsertRow
   ISESPRPT             ISessionProperties
   THREADS              Free-threaded tests for providers marked "Free" or
   DATALITE             Data Conversion tests

To prepare for the conformance tests, OLE DB 2.0 data stores are encouraged to implement the minimum provider functionality and run the set of proposed tests that correspond to that minimum provider level. Data stores that support additional functionality should also run the tests that correspond to that additional functionality. Additional tests for extended functionality are also available in an unsupported directory included in the MSDASDK.

For more information on the conformance tests, see the "Conformance Testing in OLE DB 2.0" paper in the Data Access SDK under \Microsoft Data Access SDK\Microsoft Data Access Technical Articles. (Open the Overview, then click the link to the paper.) Also see the release notes for Conformance Testing 2.0 in the Data Access SDK release notes, DASDKReadme.txt.


4.1 Connecting to a Provider

In OLE DB 1.x, consumers generally connected to a provider through the root enumerator, or by passing the provider's CLSID to CoCreateInstance, as in the following:

   //Create an instance of CLSID_MSDASQL
   CoCreateInstance(CLSID_MSDASQL, NULL, CLSCTX_INPROC_SERVER,
       IID_IDBInitialize, (void**)&pIDBInitialize);

In order to take advantage of the common services provided as part of OLE DB 2.0, OLE DB consumers need to create provider instances by calling methods in either IDataInitialize or IDBPromptInitialize. The IDataInitialize interface is supported through the Service Component Manager, which can be instantiated by using CLSID_MSDAINITIALIZE. The IDBPromptInitialize interface is supported by the Data Links component, which can be instantiated by using CLSID_DataLinks.

For example, to create an instance of a provider that can take advantage of OLE DB 2.0 services, based on the provider's CLSID, the above code would look like the following:

   //Create an instance of the OLE DB Initialization Component
   CoCreateInstance(CLSID_MSDAINITIALIZE, NULL, CLSCTX_INPROC_SERVER,
       IID_IDataInitialize, (void**)&pIDataInitialize);

   //Create an instance of CLSID_MSDASQL with supported Services
   pIDataInitialize->CreateDBInstance(CLSID_MSDASQL, NULL,
       CLSCTX_INPROC_SERVER, NULL, IID_IDBInitialize,
       (IUnknown**)&pIDBInitialize);

4.2 Visual C++ Requirements for Alpha

On Alpha, OLE DB 2.0 requires Visual C++ 5.0, Service Pack 3, or later.

4.3 The colid Member of DBPROP Structure

Properties are defined by a DBPROP structure that includes the property ID, options, status, and value. Because some properties apply to individual columns, the DBPROP structure also contains a colid element to specify the column to which the property applies. For properties that don't apply to individual columns, the colid element is not used by the provider.

However, it is important to note that the OLE DB consumer must set this element to DB_NULLID for properties that do not apply to columns, as opposed to leaving it uninitialized. Leaving the element uninitialized will cause problems for remoting or other services that copy DBPROP structures without knowledge of which properties apply to columns and which do not.

4.4 OLE DB Resource Pooling

4.4.1 Overview

OLE DB 2.0 provides common services that improve the native functionality and performance of the OLE DB provider. Services include the ability to scroll or find over provider's rowsets that don't natively support such functionality, as well as performance and scaling features such as resource pooling and automatic transaction enlistment within a Microsoft Transaction Server environment.

The resource pooling and automatic transaction enlistment features of the OLE DB 2.0 service components go a long way toward helping providers and consumers build good, scalable applications. However, there are rules that both the provider and consumer must take into account in order to make the most out of these services. The following sections describe how OLE DB resource pooling and transaction enlistment work, and how to leverage them from your provider or application code.

4.4.2 Details

OLE DB services are automatically invoked any time the consumer creates an OLE DB data source object through IDataInitialize or IDBPromptInitialize. OLE DB services are automatically invoked by default when using ADO.

When the application creates an OLE DB data source object by using one of the above methods, OLE DB services query the provider for supported information, and provide a proxy data source object (DPO) to the application. This DPO appears to the consuming application like any other data source object, but setting properties merely caches the information in the local proxy.

When the application calls IDBInitialize::Initialize(), the DPO checks whether any connections already exist that match the specified connection information and are not in use. If so, rather than creating a new object, setting properties, and establishing a new connection to the database, the DPO merely uses the existing initialized data source object.

When the application releases the data source object, it is returned to the pool. Any pooled data source that is not reused within 60 seconds is automatically released.

Immediately after initializing the data source, the resource pool creates a session and caches it internally. The first time the application asks for a Session object, the DPO returns this cached session. When the application releases the session, the resource pool continues to hold onto it until the data source is timed out. This is because transaction enlistment goes on at the session level and can be very expensive. By holding onto the Session object, the resource pool can ensure that multiple connections enlisted in the same transaction don't have to re-enlist each time.

OLE DB resource pooling implements multiple, homogeneous pools. That is, there is a separate pool of connections for each combination of connection information used. This makes it very fast to identify a candidate connection. To find a match, it is not necessary to compare multiple different types of connections within one pool. Because the pool is locked until a match is found, implementing these separate pools reduces contention, which is extremely important in a scalable environment. (For more information about contention, see section 4.4.3 in this document.)

The OLE DB services also cache provider information, such as initialization properties, default registry information, and even the provider's class factory. After this information has been obtained for a provider, OLE DB services never have to ask again. Registry lookups can be expensive and retrieving property information requires task memory allocations. Therefore, reducing these calls greatly improves scalability and performance.

Within a Microsoft Transaction Server environment, the OLE DB services automatically detect whether or not the calling thread is in a distributed transaction, and enlist the connection in the transaction, if necessary, by calling the provider's ITransactionJoin interface.

4.4.3 Writing Scalable OLE DB Providers

There are a few simple rules for writing good, scalable OLE DB providers. Most rules for working in a multithreaded environment have to do with making sure that one thread isn't blocking the other threads from doing work.

The first rule in writing scalable, multithreaded providers is to minimize the use of global critical sections. Global critical sections prevent other threads from doing work until the thread that holds a lock completes. When providers make use of global critical sections, adding more threads actually hurts performance because the threads continually block each other. This thread-blocking is known as "contention."

A second rule for writing good scalable providers is to reduce the number of memory allocations. The use of mpheap, which is shipped as part of MSDN, improves memory allocation and management in Windows NT 4.0 by allocating application memory from large blocks of memory acquired from the operating system and being less aggressive about freeing/compacting that memory. Note that the Windows NT 5.0 memory allocator will have much of this memory management built in.

When memory is allocated by one component and freed by another, the memory must be allocated through the operating system's task memory allocator. Memory acquired and freed by using this allocator is expensive, so providers should ensure that the only time they use the task memory allocator is for memory that must be passed off to the consumer. Providers should never use the task memory allocator for allocating memory that they will free themselves.

Within a Microsoft Transaction Server environment, providers must support distributed transactions if packages are marked as anything other than nontransactional. To ensure that your provider can be used for transactional components within Microsoft Transaction Server, you must support ITransactionJoin to enlist in a distributed transaction. You should also support calling ITransactionJoin::JoinTransaction with a null transaction object to unenlist from a transaction that has been completed.

Finally, to leverage the scaling support built into the OLE DB services, you must ensure that your provider works well with OLE DB pooling.

Working with OLE DB Pooling

To work well with OLE DB pooling, or with any OLE DB service, your provider must support aggregation of all objects. This is a requirement of any OLE DB 1.5 or later provider. It is critical for leveraging services. Providers that don't support aggregation cannot be pooled, and no additional services will be provided.

To be pooled, providers must support the free thread model, or at minimum, the rental thread model. The resource pool determines the provider's thread model according to the DBPROP_THREADMODEL property.

If the provider has a global connection state that may change while the data source is in an initialized state, it should support the new DBPROP_RESETDATASOURCE property. This property is set before a connection is reused, and gives the provider the opportunity to clean up state before its next use. If the provider cannot clean up some state associated with the connection, it can return DBPROPSTATUS_NOTSETTABLE for the property, and the connection will not be reused.

Providers that connect to a remote database and can detect whether or not that connection may be lost should support the DBPROP_CONNECTIONSTATUS property. This property gives the OLE DB services the ability to detect dead connections and make sure they are not returned to the pool.

Finally, automatic transaction enlistment generally does not work unless it is implemented at the same level that pooling occurs. Providers that support automatic transaction enlistment themselves should support disabling this enlistment by exposing the DBPROP_INIT_OLEDBSERVICES property and disabling enlistment if the DBPROPVAL_OS_TXNENLISTMENT is deselected.

4.4.4 Leveraging Pooling in your OLE DB Application

To leverage pooling in your application, you must make sure OLE DB services are invoked by obtaining your data source through IDataInitialize or IDBPromptInitialize. If you directly use CoCreateInstance to invoke the provider based on the provider's CLSID, no OLE DB services will be invoked.

The OLE DB services will maintain pools of connected data sources as long as a reference to IDataInitialize or IDBPromptInitialize is held, or as long as there is a connection in use. Pools will also be maintained automatically within a Microsoft Transaction Server or Internet Information Server (IIS) environment. If your application will take advantage of pooling outside of a Microsoft Transaction Server or IIS environment, you should keep a reference to IDataInitialize or IDBPromptInitialize, or hold onto at least one connection. To make sure that the pool does not get destroyed when the last connection is released by the application, keep the reference or hold onto the connection for the lifetime of your application.

OLE DB services identify the pool from which the connection will be drawn at the time of Initialize. After the connection is drawn from a pool, it cannot be moved to a different pool. Therefore, avoid doing things in your application that will change initialization information, such as calling UnInitialize, or calling QueryInterface for a provider-specific interface prior to calling Initialize. Also, connections established with a prompt value other than DBPROMPT_NOPROMPT will not be pooled. However, the initialization string retrieved from a connection established through prompting can be used to establish additional pooled connections to the same data source.

Some providers must make a separate connection for each session. These additional connections must be separately enlisted in the distributed transaction, if one exists. OLE DB services will cache and reuse a single session per data source, but if the application requests more than one session at a time from a single data source, the provider may end up making additional connections and doing additional transaction enlistments that are not pooled. It is actually more efficient to create a separate data source for each session in a pooled environment than to create multiple sessions from a single data source.

Finally, because ADO automatically makes use of OLE DB services, you can simply use ADO to establish connections and the pooling and enlistment will happen automatically!

4.4.5 Enabling/Disabling OLE DB Services

The OLE DB Service Component Manager compares the properties specified by the consumer to those supported by the provider in order to determine whether or not individual service components could be invoked in order to satisfy extended functionality requested by the consumer. For example, if an application requests a scrollable cursor and the provider only supports a forward-only cursor, the Service Component Manager will invoke the Client Cursor Engine service component in order to provide scrollable functionality. If the application is relying on extended functionality supported by default on the provider's rowset, and the application does not explicitly set the properties to request that functionality, the functionality may not appear on the rowset returned by the Client Cursor Engine. To be interoperable, applications should always set properties to explicitly request extended functionality where needed.

In some cases, it may be necessary to disable individual OLE DB services in order to work well with existing applications that make assumptions about the characteristics of a provider. OLE DB services provide the ability to disable individual services, or all services, either on a connection-by-connection basis or for all applications using a single provider.

Enabling/Disabling Services for a Provider

Individual OLE DB services can be enabled or disabled by default for all applications that access a single provider. This is done by adding an OLEDB_SERVICES registry entry under the provider's CLSID, with a DWORD value specifying the services to enable or disable as follows:

   Default Services Enabled                     Keyword Value

   All services (the default)                   0xffffffff
   All except Pooling and AutoEnlistment        0xfffffffe
   All except Client Cursor                     0xfffffffb
   All except pooling, enlistment, and cursor   0xfffffff0
   No services                                  0x00000000
   No aggregation, all services disabled        <missing key>

Overriding Provider Service Defaults

The provider's registry value for OLEDB_SERVICES is returned as the default value for the DBPROP_INIT_OLEDBSERVICES initialization property on the data source object.

As long as the registry entry exists, the provider's objects will be aggregated and the user can override the provider's default setting for enabled services by setting the DBPROP_INIT_OLEDBSERVICES property prior to initialization. To enable or disable a particular service, the user will generally get the current value of the DBPROP_INIT_OLEDBSERVICES property, set or clear the bit for the particular service to be enabled or disabled, and reset the property. DBPROP_INIT_OLEDBSERVICES can be set directly in OLE DB, or in the connection string passed to ADO or IDataInitialize::GetDataSource. The corresponding values to enable/disable individual services are listed below:

   Default Services Enabled                    Property Value

   All services                                DBPROPVAL_OS_ENABLEALL
   All except Pooling and AutoEnlistment       DBPROPVAL_OS_ENABLEALL &
                                               ~(DBPROPVAL_OS_RESOURCEPOOLING |
                                                 DBPROPVAL_OS_TXNENLISTMENT)
   All except Client Cursor                    DBPROPVAL_OS_ENABLEALL &
                                               ~DBPROPVAL_OS_CLIENTCURSOR
   All except pooling, enlistment, and cursor  DBPROPVAL_OS_ENABLEALL &
                                               ~(DBPROPVAL_OS_RESOURCEPOOLING |
                                                 DBPROPVAL_OS_TXNENLISTMENT |
                                                 DBPROPVAL_OS_CLIENTCURSOR)
   No services                                 ~DBPROPVAL_OS_ENABLEALL

   Default Services Enabled                      Value in Connection String

   All services (the default)                    "OLE DB Services = -1;"
   All except Pooling and AutoEnlistment         "OLE DB Services = -2;"
   All except Client Cursor                      "OLE DB Services = -5;"
   All except pooling, enlistment, and cursor    "OLE DB Services = -7;"
   No services                                   "OLE DB Services = 0;"

If the registry entry does not exist for the provider, the Component Managers will not aggregate the provider's objects, and no services will be invoked, even if explicitly requested by the user.

4.5 Columns Added by DBPROP_UNIQUEROWS Not Included in GetColumnsInfo


OLE DB 2.0 adds a new property, DBPROP_UNIQUEROWS, that allows the provider to add columns to uniquely identify each row of the rowset. The information for these columns is available in IColumnsRowset and IColumnsInfo. However, the count of columns returned by the pcCols argument of IColumnsInfo::GetColumnsInfo does not include these added columns.


4.6 Fetch Backward and Scroll Backward Property Descriptions

OLE DB defines properties to describe the rowset's ability to fetch or scroll backward. The textual descriptions listed for these properties in the OLE DB 2.0 Programmer's Reference are "Fetch Backward" and "Scroll Backward," respectively. Note that the actual textual description should be "Fetch Backwards" and "Scroll Backwards," as they were defined in OLE DB 1.x.

4.7 Rowset Properties and Non-Row Returning Commands

Properties may be set on a Command object before execution to influence how the command is executed. Some of these properties, such as COMMANDTIMEOUT, affect any command execution. Other properties, such as DBPROP_CANHOLDROWS, affect the resulting rowset, if any. Properties that don't apply to a particular statement, such as rowset properties set on a non-row returning command, should be ignored by the provider when executing the command.

4.8 Next Fetch Position and IRowsetFind::FindNextRows

Calling IRowsetFind::FindNextRows with a cbBookmark value of zero searches for a column value relative to the current GetNextRows position. Following the call to FindNextRows, the fetch position for GetNextRows is also moved. If a valid bookmark is passed to FindNextRows, then the fetch position of GetNextRows is unchanged.

According to the OLE DB 2.0 Programmer's Reference, calling FindNextRows with a cbBookmark value of zero and a cRows value of zero doesn't fetch any rows but moves the fetch position to the next match or off the end of the rowset if no match is found. However, the Programmer's Reference doesn't say whether the position is before or after the next match.

Calling FindNextRows with a cbBookmark value of zero and a cRows value of zero moves the next fetch position to the same location as calling FindNextRows with a cbBookmark value of zero and a cRows value of one. In general, it is not useful to call FindNextRows with a cRows value of zero. Furthermore, because of this ambiguity in the Programmer's Reference, consumers should not rely on the fetch position following a call to FindNextRows with a cbBookmark value of zero and a cRows value of zero.

4.9 Null pFindValue in IRowsetFind::FindNextRows

The description of the pFindValue argument in IRowsetFind::FindNextRows states that "If this value is NULL, it is compared to other values according to the DBPROP_NULLCOLLATION property returned in the Data Source Information property set." Note that this refers to a pFindValue that indicates a null column value, via a status value of DBSTATUS_S_ISNULL or a variant of type VT_NULL. It is an error (E_INVALIDARG) to pass a null pointer for pFindValue.

4.10 Changing the Current Catalog

Some providers support changing the current catalog (database) on a connection through provider-specific mechanisms, such as executing SQL SET statements. OLE DB provides a data source property, DBPROP_CURRENTCATALOG, for doing this. Consumers should use this common property mechanism, as opposed to executing provider-specific statements, in order to ensure interoperability and to prevent inconsistent provider states.

4.11 Enlisting in Distributed Transaction Outcome Events

OLE transactions define an interface, ITransactionOutcomeEvents, that components enlisted in a distributed transaction can use in order to get notified of the outcome of a transaction. The OLE DB specification suggests that providers register for these outcome events prior to enlisting in a distributed transaction. However, calling the provider's ITransactionOutcomeEvents can be expensive, as it results in an extra message. Further, in some cases the Microsoft DTC does not fire these events, which results in memory leaks from references on the connection sink not being released, and in connections being held longer than necessary. For these reasons, providers should avoid registering for transaction outcome events.

4.12 Specification Addendums

4.12.1 IRowsetResynch

IRowsetResynch is an interface defined in OLE DB 1.x to allow consumers to retrieve the current values for rows that may have been changed in the data store since they were last fetched. In OLE DB 2.0, IRowsetResynch is superseded by IRowsetRefresh, which provides better control over when data values are updated from the data store.

In future releases of OLE DB, a common service component will expose IRowsetRefresh over providers that currently expose only IRowsetResynch. In the interim, consumers can work with legacy providers that support only IRowsetResynch by directly calling that interface. This interface is documented here for such consumers.

IRowsetResynch::GetVisibleData

Gets the data in the data source that is visible to the transaction for the specified row.

     HRESULT GetVisibleData (
        HROW         hRow,
        HACCESSOR    hAccessor,
        void *       pData);


hRow [in]

   The handle of the row for which to get the visible data.
   This can be the handle of a row with a pending delete.

hAccessor [in]
   The handle of the accessor to use. If hAccessor is the handle
   of a null accessor (cBindings in IAccessor::CreateAccessor was
   zero), then GetVisibleData does not get any data values.

pData [out]
   A pointer to a buffer in which to return the data. The
   consumer allocates memory for this buffer.

Return Code

S_OK
   The method succeeded. The status of all columns bound by
   the accessor is set to DBSTATUS_S_OK, DBSTATUS_S_ISNULL,
   or DBSTATUS_S_TRUNCATED.

DB_S_ERRORSOCCURRED
   An error occurred while returning data for one or more
   columns, but data was successfully returned for at least
   one column.

E_FAIL
   A provider-specific error occurred.

E_INVALIDARG
   pData was a null pointer and hAccessor was not a null accessor.

E_UNEXPECTED
   ITransaction::Commit or ITransaction::Abort was called and the
   object is in a zombie state.

DB_E_BADACCESSORHANDLE
   hAccessor was invalid. It is possible for a reference accessor
   or an accessor that has a binding that uses provider-owned
   memory to be invalid for use with this method, even if the
   accessor is valid for use with IRowset::GetData or
   IRowsetChange::SetData.

DB_E_BADACCESSORTYPE
   The specified accessor was not a row accessor.

DB_E_BADROWHANDLE
   hRow was invalid.

DB_E_DELETEDROW
   hRow referred to a row for which a deletion had been transmitted
   to the data source.

DB_E_ERRORSOCCURRED
   Errors occurred while returning data for all columns. To
   determine what errors occurred, the consumer checks the
   status values.

DB_E_LIMITREACHED
   The provider was unable to retrieve the visible data due
   to reaching a limit on the server, such as a query execution
   timing out.

DB_E_NEWLYINSERTED
   hRow referred to a row for which an insertion had been
   transmitted to the data source.

DB_E_NOTREENTRANT
   The provider called a method from IRowsetNotify in the
   consumer and the method has not yet returned.

DB_E_PENDINGINSERT
   The rowset was in delayed update mode and hRow referred to
   a pending insert row.

If this method performs deferred accessor validation and that validation takes place before any data is transferred, it can also return any of the following return codes for the applicable reasons listed in the corresponding DBBINDSTATUS values in IAccessor::CreateAccessor:

   DB_E_BADBINDINFO
   DB_E_BADORDINAL
   DB_E_BADSTORAGEFLAGS
   DB_E_UNSUPPORTEDCONVERSION

Comments

This method makes no logical change to the state of the object.

A consumer calls GetVisibleData to retrieve the data in the data source that is visible to the transaction for the specified row. However, GetVisibleData does not affect the values in the rowset's copy of the row.

If GetVisibleData fails, the memory to which pData points is not freed, but its contents are undefined. If, before GetVisibleData failed, the provider allocated any memory for return to the consumer, the provider frees this memory and does not return it to the consumer.

IRowsetResynch::ResynchRows

Gets the data in the data source that is visible to the transaction for the specified rows and updates the rowset's copies of those rows.

     HRESULT ResynchRows (
        ULONG            cRows,
        const HROW       rghRows[],
        ULONG*           pcRowsResynched,
        HROW**           prghRowsResynched,
        DBROWSTATUS**    prgRowStatus);


cRows [in]

   The count of rows to resynchronize. If cRows is zero, ResynchRows
   ignores rghRows and reads the current value of all active rows.

rghRows [in]
   An array of cRows row handles to be resynchronized. If cRows is
   zero, this argument is ignored.

pcRowsResynched [out]
   A pointer to memory in which to return the number of rows the
   method attempted to resynchronize. The caller may supply a null
   pointer if no list is desired. If the method fails, the provider
   sets *pcRowsResynched to zero.

prghRowsResynched [out]
   A pointer to memory in which to return the array of row handles
   the method attempted to resynchronize. If cRows is not zero, then
   the elements of this array are in one-to-one correspondence with
   those of rghRows. If cRows is zero, the elements of this array
   are the handles of all active rows in the rowset. When cRows is
   zero, ResynchRows will add to the reference count of the rows whose
   handles are returned in prghRowsResynched.

   The rowset allocates memory for the handles and the client should
   release this memory with IMalloc::Free when no longer needed. This
   argument is ignored if pcRowsResynched is a null pointer and must
   not be a null pointer otherwise. If *pcRowsResynched is 0 on output
   or the method fails, the provider does not allocate any memory and
   ensures that *prghRowsResynched is a null pointer on output.

prgRowStatus [out]
   A pointer to memory in which to return an array of row status
   values. The elements of this array correspond one-to-one with
   the elements of *prghRowsResynched. If no errors occur while
   resynchronizing a row, the corresponding element of
   *prgRowStatus is set to DBROWSTATUS_S_OK. If the method fails
   while resynchronizing a row, the corresponding element is set
   as specified in DB_S_ERRORSOCCURRED. If prgRowStatus is a null
   pointer, no row status values are returned.

   The rowset allocates memory for the row status values and
   returns the address to this memory; the client releases this
   memory with IMalloc::Free when it is no longer needed. This
   argument is ignored if pcRowsResynched is a null pointer.
   If *pcRowsResynched is zero on output or the method fails,
   the provider does not allocate any memory and ensures that
   *prgRowStatus is a null pointer on output.

Return Code

S_OK
   The method succeeded. All rows were successfully resynchronized.
   The following value can be returned in *prgRowStatus:
   * The row was successfully resynchronized. The corresponding
     element of *prgRowStatus contains DBROWSTATUS_S_OK.

DB_S_ERRORSOCCURRED
   An error occurred while resynchronizing a row, but at least
   one row was successfully resynchronized. Successes can occur
   for the reason listed under S_OK. The following errors can
   occur:
   * An element of rghRows was invalid or referred to a row that
     this thread does not have access to. The corresponding
     element of *prgRowStatus contains DBROWSTATUS_E_INVALID.
   * Resynchronizing a row was canceled during notification.
     The row was not resynchronized and the corresponding element
     of *prgRowStatus contains DBROWSTATUS_E_CANCELED.
   * An element of rghRows referred to a row for which a
     deletion had been transmitted to the data source. The
     corresponding element of *prgRowStatus contains
     DBROWSTATUS_E_DELETED.
   * The row was not resynchronized due to reaching a limit
     on the server, such as a query execution timing out.
     The corresponding element of *prgRowStatus contains
     DBROWSTATUS_E_LIMITREACHED.
   * An element of rghRows referred to a row on which a
     storage object was open. The corresponding element of
     *prgRowStatus contains DBROWSTATUS_E_OBJECTOPEN.
   * An element of rghRows referred to a pending insert row.
     The corresponding element of *prgRowStatus contains
     DBROWSTATUS_E_PENDINGINSERT.
   * An element of rghRows referred to a row for which an
     insertion had been transmitted to the data source. The
     row was not resynchronized and the corresponding element
     of *prgRowStatus contains DBROWSTATUS_E_NEWLYINSERTED.

E_FAIL
   A provider-specific error occurred.

E_INVALIDARG
   cRows was not zero and rghRows was a null pointer.
   pcRowsResynched was not a null pointer and prghRowsResynched
   was a null pointer.

E_UNEXPECTED
   ITransaction::Commit or ITransaction::Abort was called and
   the object is in a zombie state.

DB_E_ERRORSOCCURRED
   Errors occurred while resynchronizing all of the rows.
   Errors can occur for the reasons listed under
   DB_S_ERRORSOCCURRED.

DB_E_NOTREENTRANT
   The provider called a method from IRowsetNotify in the
   consumer and the method has not yet returned.

DB_SEC_E_PERMISSIONDENIED
   The consumer did not have sufficient permission to
   resynchronize the rows.

Comments

ResynchRows refreshes the values in the rowset's copy of each of the specified rows with the currently visible contents of the underlying row. Changes made to the row by the current transaction are always visible to ResynchRows, including changes made by other rowsets in the same transaction. Whether changes made by other transactions are visible to ResynchRows depends on the isolation level of the current transaction.

If a specified row has been deleted from the data source and this deletion is visible, ResynchRows returns DBROWSTATUS_E_DELETED in the error status array for the row and the row is treated as a deleted row.

Any changes transmitted to the data source are not lost; they will be committed or aborted when the transaction is committed or aborted. All pending changes are lost because they exist only in the rowset's copy of the row and ResynchRows overwrites the contents of this copy. The pending change status is removed from the row.

4.12.2 IDcInfo

OLE DB 2.0 includes a redistributable Data Conversion Library (MSDADC.DLL) that providers can use to perform common conversions. IDcInfo provides methods for the provider using this conversion library to provide information, such as provider version, that may affect how conversions are performed. If IDcInfo::SetInfo is not called, the Data Conversion Library assumes the provider is a 1.0-compliant provider.

IDcInfo can be obtained by calling QueryInterface on IDataConvert with IID_IDcInfo. The documentation of the IDcInfo interface follows.

IDCInfo::GetInfo

   HRESULT GetInfo (
           ULONG        cInfo,
           DCINFOTYPE   rgeInfoType[],
           DCINFO       **prgInfo);


cInfo [in]

   The number of settings for which to return the information.

rgeInfoType[] [in]
   An array of cInfo DCINFOTYPE values indicating the types of
   information to return. The Data Conversion component supports
   the following infotype:

     DCINFOTYPE            Description
     ==========            ===========

     DCINFOTYPE_VERSION    The OLE DB version of the provider

prgInfo [out]
   A pointer to memory in which to return an array of DCINFO
   structures.

The DCINFO structure is:
  typedef struct  tagDCINFO {
      DCINFOTYPE eInfoType;
      VARIANT vData;
      } DCINFO;

The elements of this structure are used as follows:

     Element       Description
     =======       ===========

     eInfoType     The type of information.
     vData         A VARIANT that contains the information to
                   be set. For DCINFOTYPE_VERSION, the variant
                   type is VT_UI4.

IDCInfo::SetInfo

   HRESULT SetInfo (
           ULONG     cInfo,
           DCINFO    rgInfo[]);


cInfo [in]

   The number of settings for which to set version information.

rgInfo[] [in]
   An array of cInfo DCINFO structures.


5.1 OLE DB Specification Issues

This section details known issues or limitations with the OLE DB 2.0 specification.

5.1.1 Support for Aggregation Required

Each method in OLE DB that generates a new object lists a possible return code of DB_E_NOAGGREGATION. OLE DB 1.5 and later providers must support aggregation. For OLE DB 1.5 or later providers, this error should only be returned when the user specifies a controlling unknown and does not request IID_IUnknown. The description of this error should be changed as follows:

   pUnkOuter was not a null pointer, and the provider is an OLE DB
   1.0 or 1.1 provider that does not support aggregation of the
   object being created.

   pUnkOuter was not a null pointer, and riid was not IID_IUnknown.

5.1.2 Fetch Position After Calling FindNextRow with cRows=0

The description of the cRows argument to FindNextRow calls out that, if cRows is 0 and there are no other errors, no rows are fetched, but the fetch position for FindNextRow is moved to the next match. Note that this is only true if cbBookmark is also zero, indicating that the search was started from the next fetch position. If cbBookmark is nonzero, then the fetch position is never changed.

Also, the specification doesn't state whether the fetch position is before or after the next match. The fetch position should be set as if FindNextRow was called with cRows=1. Calling FindNextRow with cRows=0 is functionally equivalent to calling FindNextRow with cRows=1 and then releasing the retrieved hRow. In general, consumers should not rely on the next fetch position after calling FindNextRow with cRows=0 and cbBookmark = 0, as some providers may not position correctly.

5.1.3 Property Description Format

OLE DB consumers, such as the Data Links component, may rely on certain formatting for property descriptions. Note that the property descriptions returned by GetPropertyInfo should not be localized, and providers should use the provided descriptions for OLE DB-defined properties. In addition, property descriptions must be unique across all initialization properties, and must not contain the equal sign, or single or double quotation marks.

5.1.4 DB_E_BADSTARTPOSITION Fetching Off a Rowset

OLE DB 1.x attempted to distinguish between fetching immediately before or after the rowset (DB_S_ENDOFROWSET) versus fetching more than one row before or after the rowset (DB_E_BADSTARTPOSITION). OLE DB 2.0 providers should always return DB_S_ENDOFROWSET when the user attempts to fetch off either end of the rowset. However, consumers should be aware that OLE DB 1.x providers, and some OLE DB 2.0 providers, may continue to return DB_E_BADSTARTPOSITION when positioning more than one row before the first row, or more than one row after the last row, of the rowset.

5.1.5 Quoting Names in IOpenRowset::OpenRowset

The Comments section of IOpenRowset::OpenRowset says that consumers should supply fully qualified names as pTableID on providers that support catalog or schema names, as described in "Fully Qualified Names" in Chapter 4. Chapter 4, "Creating a Rowset with IOpenRowset," specifies that consumers may need to construct a fully qualified table name, and describes quoting of the individual values of the table name, but does not define the cases under which consumers should qualify names.

Consumers should provide qualified table names only to reference tables in other schemas or catalogs. Further, consumers should not quote table names unless such quoting is required to make table names unambiguous (for example, to enforce case sensitivity). Providers for which quoting a simple table name has no meaning may not support quoting tables in OpenRowset while they do support quoting tables as part of a command text for parsing reasons. Providers should not require table names to be quoted, and should guarantee that unquoted table names correctly open the specified table, even if the table name contains special characters, as long as the table can be unambiguously identified without quoting.

5.1.6 Returned String Buffers

For method calls that return multiple strings within an allocated string buffer, consumers should not make assumptions about the relationship between the individual string pointers and the returned string buffer. Specifically, the consumer should always treat the returned string values as read-only memory, and should not assume that the individual string values all occur within the string buffer. The only requirement on the provider is that freeing the string buffer releases any task memory allocated for that call. This applies to the ppDescBuffer returned by GetPropertyInfo to hold the pwszDescription elements of the DBPROPINFO structure, as well as the ppStringsBuffer returned by GetColumnInfo to return string values contained within the DBCOLUMNINFO structure.

5.1.7 Disabling Individual Services

The description of the DBPROP_INIT_OLEDBSERVICES property states that individual services may be deselected by specifying the bitwise-OR of DBPROPVAL_OS_ENABLEALL along with the bitwise complement of any services to be deselected. In fact, individual services are deselected by specifying the bitwise-AND of DBPROPVAL_OS_ENABLEALL along with the bitwise complement of any services to be deselected.

For example, DBPROPVAL_OS_ENABLEALL & ~DBPROPVAL_OS_TXNENLISTMENT enables all services except automatic transaction enlistment in a Microsoft Transaction Server environment.

5.1.8 Specifying a Null punkTransactionCoord in ITransactionJoin::JoinTransaction

Consumers may specify a null punkTransactionCoord in ITransactionJoin::JoinTransaction to unenlist from a coordinated transaction. The documentation currently states that this is an error (E_INVALIDARG). It is only legal to unenlist from the coordinated transaction when the coordinated transaction has completed and there is no pending work. Calling JoinTransaction at any other time with a null punkTransactionCoord returns XACT_E_XTIONEXISTS.

5.1.9 State of a Session After a Coordinated Transaction Has Completed

The OLE DB specification does not define the state of a session once the coordinated transaction in which it is enlisted completes. Once a session is enlisted in a coordinated transaction, and that coordinated transaction completes with no retaining semantics, the session is temporarily in a zombie state, although the provider may not detect this state until the consumer executes a method that communicates with the data store. In order to return the session from a zombie state, the consumer must call ITransactionJoin::JoinTransaction with a valid punkTransactionCoord to enlist in a new coordinated transaction, or with a null punkTransactionCoord in order to unenlist from the coordinated transaction.

5.1.10 New Columns in PROCEDURE_PARAMETERS Rowset

OLE DB 2.0 added two new columns, TYPE_NAME and LOCAL_TYPE_NAME, to the PROCEDURE_PARAMETERS schema rowset. Note that 1.x providers do not support these columns, and some OLE DB 2.0 providers may not initially support these new columns. Consumers should be prepared to handle providers that do not support these columns.

5.1.11 New Columns in FOREIGN_KEYS Rowset

OLE DB 2.0 added three new columns, FK_NAME, PK_NAME, and DEFERRABILITY, to the FOREIGN_KEYS schema rowset. Note that 1.x providers do not support these columns, and some OLE DB 2.0 providers may not initially support these new columns. Consumers should be prepared to handle providers that do not support these columns.

5.1.12 New Column in PRIMARY_KEYS Rowset

OLE DB 2.0 added a new column, PK_NAME, to the PRIMARY_KEYS schema rowset. Note that 1.x providers do not support this column, and some OLE DB 2.0 providers may not initially support this new column. Consumers should be prepared to handle providers that do not support this column.


5.1.13 New Convert Flags in IConvertType::CanConvert

OLE DB 2.0 added three new convert flags, DBCONVERTFLAGS_ISFIXEDLENGTH, DBCONVERTFLAGS_ISLONG, and DBCONVERTFLAGS_FROMVARIANT. These three flags can coexist with either DBCONVERTFLAGS_COLUMN or DBCONVERTFLAGS_PARAMETER to provide additional information for determining whether the DBTYPE specified in dwFromType can be converted to the DBTYPE specified in wToType. Note that these flags are not supported by OLE DB 1.x providers, and may not be supported by OLE DB 2.0 providers. These flags are also not supported by the OLE DB 2.0 Data Conversion Library because they primarily affect how the provider deals with the data. Consumers that specify these flags should be prepared for providers to return DB_E_BADCONVERTFLAG if they don't support the flags.

5.1.14 Length of BLOB Columns

The OLE DB documentation states that consumers can obtain the total number of bytes to be written to a stream by binding to the DBPART_LENGTH when retrieving an ISequentialStream interface over the data. Note that binding to the length of a BLOB bound as a storage object may be expensive for some providers, and that 1.x and some 2.0 providers may return sizeof(IUnknown*) for the length when binding to ISequentialStream instead of the actual number of bytes to be passed in the stream. Consumers should bind to the length when retrieving an ISequentialStream only if they need to know the total length before reading the data, and should be prepared to handle providers that fail to return this information.

5.1.15 Error in Description of DB_E_ERRORSOCCURRED

The description of DB_E_ERRORSOCCURRED in many cases ends with the sentence "This method can fail to set properties for any of the reasons specified in DB_S_ERRORSOCCURRED, except the reason that states that it was not possible to set the property." Everything after the comma should be deleted in this sentence. Any of the errors that cause DB_S_ERRORSOCCURRED for a property set with a dwOptions value of DBPROPOPTIONS_OPTIONAL will cause DB_E_ERRORSOCCURRED if set with a dwOptions value of DBPROPOPTIONS_REQUIRED.

5.1.16 Case Sensitivity of TableID and ColumnID

OLE DB consumers should treat table and column names within DBIDs as case-sensitive. This is not clear in the documentation.

5.1.17 hChapter Value for Nonchaptered Rowsets

When calling a method that takes hChapter as an argument, consumers should set this value to NULL when working with a nonchaptered rowset. Although nonchaptered providers should ignore this value, passing NULL improves interoperability between chaptered and nonchaptered providers.

5.1.18 Wrong Definition of IRowsetFind in Appendix D

The function prototype for IRowsetFind in Appendix D of the OLE DB Programmer's Reference shows the seventh argument as a Boolean argument named fSkipStartRow. This argument should actually be a LONG named lRowsOffset, as it appears elsewhere in the specification.

5.1.19 Requesting a View Object When Opening a Rowset

The OLE DB 1.5 specification added support for applying simple filters and sorts through views. A View object may be returned in place of a rowset when executing any rowset-returning method. To do this, the consumer requests one of the following interfaces as the riid argument specified in the method that creates the rowset, or requests the associated property from the View property group when opening the rowset:

   *  IViewChapter
   *  IViewFilter
   *  IViewRowset
   *  IViewSort

Requesting any other interface when opening the rowset, and not explicitly requesting an interface from the View property group, returns the Rowset object, regardless of other properties specified on the rowset.

5.1.20 Error in Definition of View Properties

Each of the following properties is incorrectly listed as a property belonging to the Rowset or DataSource property group. All four of these properties should actually appear only as View properties:

   *  DBPROP_IViewChapter
   *  DBPROP_IViewFilter
   *  DBPROP_IViewRowset
   *  DBPROP_IViewSort

5.1.21 Incorrect Column Order Listed for FOREIGN_KEYS Schema Rowset

The FOREIGN_KEYS rowset contains the following columns in this order:


The last three are out of order in the Programmer's Reference.

Default Sort Order: FK_TABLE_CATALOG, FK_TABLE_SCHEMA, FK_TABLE_NAME

5.1.22 Escaping Special Characters in OLE DB 2.0

OLE DB 1.x defined literal values that the provider can use to report characters for escaping percent and underscore characters in a LIKE predicate. OLE DB 2.0 added the ability for a provider to report suffix characters for escaping percent and underscore in a LIKE predicate by renaming DBLITERAL_ESCAPE_PERCENT to DBLITERAL_ESCAPE_PERCENT_PREFIX and DBLITERAL_ESCAPE_UNDERSCORE to DBLITERAL_ESCAPE_UNDERSCORE_PREFIX, and by adding DBLITERAL_ESCAPE_PERCENT_SUFFIX and DBLITERAL_ESCAPE_UNDERSCORE_SUFFIX. However, the header files were never updated to take into account this change, so version 2.0 providers and consumers must continue to use DBLITERAL_ESCAPE_PERCENT and DBLITERAL_ESCAPE_UNDERSCORE to return the escape characters. There is no way in OLE DB 2.0 to return suffix escape characters. This is consistent with ANSI SQL-92.

5.2 Data Conversion Issues

5.2.1 Scale Ignored for DBTYPE_DECIMAL

The OLE DB documentation states that conversions to decimal values within the consumer's buffers should follow the scale specified in the accessor's binding structure. Be aware that the data conversion library provided in the SDK ignores the scale value when doing conversions to DBTYPE_DECIMAL.

5.2.2 International Data and OLE DB Data Conversions

With OLE DB, most data conversions are independent of locale by spec. In the OLE DB Programmer's Reference, Appendix A, under "Conversions Involving Strings," formats are defined for conversion to and from strings. For example, "...cccc.cccc" is for DBTYPE_CY and "yyyy-mm-dd" for DBTYPE_DBDATE. These formats are independent of locale, which is consistent with the ISO definition of string values representing dates. It is up to the application to convert from the predefined string format to the local string format, if that local format is preferred.

Additional query words: kboledb200

Version           : WINDOWS:2.0
Platform          : WINDOWS
Issue type        : kbinfo

Last Reviewed: August 26, 1998