Oracle® Database User's Guide 10g Release 2 (10.2) for IBM z/OS (OS/390) Part Number B25396-01
If you are writing SQL, PL/SQL, or Oracle server Java that will run on a z/OS Oracle database server, most of what you do will be exactly the same as for any other Oracle platform. The few distinctions that exist are in areas that should be no surprise: character data (EBCDIC versus ASCII) is one, and access to operating system files (or z/OS data sets) is another. Here we explain aspects of these areas that might influence your design or coding.
An Oracle database has an associated character set, an attribute that is specified (or defaulted) when the database is first created, on the CREATE DATABASE statement. Application data in character-type table columns (such as CHAR and VARCHAR2) and database object identifiers (for example, table and column names) are stored using this character set. On a given Oracle platform, all supported database character sets are from the same family: either ASCII or EBCDIC. On z/OS, of course, the EBCDIC family is used. The default Oracle database character set on z/OS is what IBM calls EBCDIC codepage 1047, which Oracle identifies as WE8EBCDIC1047. Other popular EBCDIC codepages are available and can be used on z/OS. Refer to the Oracle Database Globalization Support Guide for additional information on database character sets.
Differences between EBCDIC and ASCII become noticeable in certain types of SQL queries. The most obvious is the use of ORDER BY or GROUP BY on character values in a SELECT to return results in a particular order. EBCDIC and ASCII have different sort orders for various groups of characters. For example, decimal digits (0-9) sort before all letters in ASCII, but after all letters in EBCDIC. Upper case letters sort before lower case in ASCII, but after lower case in EBCDIC. There are other differences in the sorting of punctuation and other symbols.
The consequence of these differences is that, given identical table data, a SELECT with an ORDER BY or GROUP BY clause involving character data can return results from a z/OS Oracle server in a different order than an ASCII-based Oracle server. In most cases this is not a problem and in fact is exactly what you want. When it is not, you can use the SQL function NLSSORT or the Oracle server parameter NLS_SORT (which can be set at the session level) to specify a different collating sequence for ORDER BY or GROUP BY processing. Both NLSSORT and NLS_SORT are discussed in more detail in the Oracle Database Globalization Support Guide.
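For example, the following sketch (using a hypothetical LEDGER_ACTIVE table with character column CATEGORY, the same names used in the next example) shows how NLSSORT, or a session-level NLS_SORT setting, fixes the collation used for ordering regardless of whether the server character set is EBCDIC or ASCII:

SELECT ACCT_NUM, CATEGORY FROM LEDGER_ACTIVE
    ORDER BY NLSSORT(CATEGORY, 'NLS_SORT = WEST_EUROPEAN');

-- Alternatively, set the collation for all ORDER BY and GROUP BY
-- processing in the session:
ALTER SESSION SET NLS_SORT = WEST_EUROPEAN;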
A more subtle difference, one that sometimes affects applications that are migrated from an ASCII server to EBCDIC or vice versa, concerns the use of ordinal comparisons on character data in the WHERE clause of a SQL statement. Consider this SELECT statement:
SELECT ACCT_NUM, BALANCE FROM LEDGER_ACTIVE
    WHERE CATEGORY > 'BR10' AND CATEGORY < '99AX'
We'll assume that CATEGORY is a CHAR(4) column containing an encoded value comprising both digits and letters. This SELECT might work fine with an EBCDIC server but it will never return any rows from an ASCII server because in ASCII, digits are numerically lower than letters. (There is no ASCII character string that is greater than 'BR10' and less than '99AX'.)
Similar issues surround the use of ordinal operators and character data in PL/SQL and Java stored procedures. If you are writing applications that will be migrated between EBCDIC and ASCII Oracle servers, you must avoid this kind of construct or be prepared to change the application when migrating.
Partitioned tables are divided into multiple physically-separated groups of rows according to data criteria. If you partition a table by range of values in a character table column, you are establishing an ordinal comparison on rows that are inserted into the table. As with the SELECT…WHERE clause described in the prior section, it is possible to define partition ranges that work in EBCDIC but not ASCII, or vice versa. Care must be taken when designing a range-partitioned table that you expect to move between a z/OS Oracle server and one on an ASCII platform. Refer to the Oracle Database Administrator's Guide for details on creating and using partitioned tables. Refer to the discussion of limitations on the Oracle Export/Import utilities when processing range partitioned tables, in Chapter 4, "Oracle Tools and Utilities on z/OS".
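The following sketch illustrates the issue; the table, columns, and partition bound are hypothetical. On an EBCDIC server, letters collate before digits, so the bound '0000' places all letter-leading CATEGORY values in the first partition; on an ASCII server, digits collate before letters, so the same rows would fall into the second partition instead:

CREATE TABLE LEDGER_HIST
    ( ACCT_NUM   NUMBER,
      CATEGORY   CHAR(4),
      BALANCE    NUMBER )
    PARTITION BY RANGE (CATEGORY)
    ( PARTITION P_ALPHA VALUES LESS THAN ('0000'),
      PARTITION P_OTHER VALUES LESS THAN (MAXVALUE) );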
Most of the characters used as syntax elements in SQL translate readily between ASCII and EBCDIC, making it easy to move SQL between z/OS and other platforms. One character that sometimes causes trouble is the vertical bar, which is doubled to form the SQL concatenation operator (||). There are two different vertical bars in some EBCDIC codepages, including the default Oracle codepage on z/OS (1047). Oracle for z/OS accepts the solid vertical bar (hexadecimal 4F in most codepages) as the SQL concatenation operator; the split vertical bar (hexadecimal 6A) is not accepted.
Some file transport mechanisms (for example, some versions of FTP) may translate an ASCII vertical bar to codepoint 6A when moving a file to an EBCDIC system. There may also be terminal emulators that produce codepoint 6A in a z/OS session when the vertical bar key is pressed. This can result in invalid SQL statements for a z/OS Oracle server. One way to avoid this is to use the keyword function CONCAT in SQL statements instead of the vertical bar. CONCAT provides exactly the same functionality as vertical bar concatenation. For additional information on CONCAT refer to the Oracle Database SQL Reference.
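For example, the following two queries return the same result; the EMPLOYEES table and its columns are used purely for illustration. Because CONCAT takes exactly two arguments, calls are nested to join more than two values:

SELECT CONCAT(CONCAT(FIRST_NAME, ' '), LAST_NAME) FROM EMPLOYEES;
SELECT FIRST_NAME || ' ' || LAST_NAME FROM EMPLOYEES;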
Before reading this, you should be familiar with the file name and file processing concepts and features discussed in Chapter 2.
Much of the file processing in the Oracle server concerns the files in which the Oracle database is stored. These files, which are VSAM Linear (LDS) clusters on z/OS, are a database administration concern and are not normally referenced directly by applications. Considerations for these files are covered in the Oracle Database System Administration Guide for IBM z/OS (OS/390).
There are, however, file access mechanisms in the Oracle server that are used directly by applications. The PL/SQL UTL_FILE package, Java file I/O, external LOBs, external tables, and Datapump Export and Import are capable of accessing more or less arbitrary files external to the Oracle database. When an application uses these features, file accesses are attempted on the application's behalf by the Oracle server address space. Here we cover considerations for these features that are specific to z/OS.
Relative to file processing concepts described in Chapter 2, two key points govern file processing behavior in the server:
The POSIX indicator in a z/OS client application, tool, or utility does not affect processing in the Oracle server. How filespecs are interpreted in the server (particularly ambiguous filespecs, as defined in Chapter 2) depends on the particular server feature, not on whether the client is running in TSO/batch or a shell. Some server features are limited to accessing HFS files only or data sets only, while others are capable of accessing both, distinguished by filespec syntax.
No default prefixing or directory path applies to file names in the server. There is no implicit z/OS userid prefixing of data set names, and there is no concept of a current working directory for HFS file references.
Both Oracle security and native z/OS security are involved when Oracle accesses a file on behalf of your application. Oracle security typically involves specific database objects and mechanisms: for example, in order to access a file using the external table feature, the Oracle userid involved must be granted appropriate access rights on the database directory object associated with the file. This check occurs entirely within the Oracle database server.
When the server actually opens the file, z/OS security checks occur. These checks are based on the z/OS userid associated with the server address space, not on an Oracle userid or on the z/OS userid of your application job or session (if it is running on z/OS). One z/OS userid is associated with an Oracle server running on z/OS; this usually is set up by your system or security administrator.
The exact nature of the native z/OS security check depends on the kind of file being opened and on how the z/OS security component (for example, RACF or a comparable product) is configured. Opening a data set performs the normal data set name-based check for the requested mode of access (generally either read or write). Opening an HFS file performs a check to enforce the permission settings associated with the HFS directory and the file.
In short, both types of security check (Oracle and native z/OS) must succeed in order for an application's server file access operation to proceed. If the Oracle check fails, a specific Oracle error for that condition is issued. If the native z/OS check fails, it generally is reported as an error in a "file open" function with varying amounts of accompanying detail. z/OS error reporting details are discussed in each of the sections that follow.
Several of the file access features discussed in this section depend on the database directory object as a security and control mechanism. Accessing a file with these features requires both a directory object identifier and a file name; the directory object must exist in the database and the Oracle userid must have been granted appropriate access rights on the directory object. Creating directory objects and granting access to them is usually a database administrator or security administrator responsibility.
Associated with each directory object is a character string that relates to the OS file system. In a UNIX Oracle server, this string identifies a hierarchical file system (HFS) path. In z/OS, a directory object can identify either an HFS path or the leading (leftmost) portion of a data set name filespec. The former is used when accessing HFS files and the latter when accessing a data set. Not all Oracle features that use directory objects are capable of accessing both HFS files and data sets; refer to the following sections for specific feature details.
Because of the dual interpretation of directory objects on z/OS, the character string associated with the object must be unambiguous as to type, as that term is defined in Chapter 2. If it represents an HFS path, it must begin with a single forward slash followed by zero or more HFS directory names, each ending with a slash. (The final ending slash is not required.) If the directory object represents high level qualifiers of a data set name, it must begin with exactly two forward slashes followed by one or more qualifiers, each ending with a period. The final ending period is not required; the string is assumed to contain whole data set name qualifiers, and an intervening period is added if needed when the string is used to compose a complete filespec. Including a leading single quote (apostrophe) between the slashes and the high level qualifier, signifying no userid prefixing, is permitted but not required since no prefixing is done in the server.
A directory object defined with HFS syntax can be used only to access files in the z/OS POSIX HFS environment. One defined with data set syntax can be used only to access data sets.
The following example shows CREATE DIRECTORY statements for HFS file and data set access in turn:
CREATE DIRECTORY APP1DIR AS '/oracle/dev/app1';
CREATE DIRECTORY APP2DIR AS '//JSMITH.APP2';
The UTL_FILE PL/SQL package allows you to read and write arbitrary flat files containing text data. On z/OS, UTL_FILE can access both HFS files and data sets, with data set access subject to limitations described in this section.
Prior to Oracle Database 10g, UTL_FILE relied on an Oracle server initialization parameter, UTL_FILE_DIR, to indicate which HFS directories could be accessed by the package, and data set access was not supported on z/OS. Starting with Oracle Database 10g, UTL_FILE uses the database directory object as the control mechanism. The UTL_FILE_DIR parameter is still supported, but Oracle recommends using directory objects instead. In order to access a data set with UTL_FILE, you must use a directory object defined as described in the previous section.
UTL_FILE data set support is limited to sequential (DSORG=PS) and partitioned (DSORG=PO) organizations with record formats F, FB, V, and VB. Partitioned data set members can be accessed for input only: opening a PDS member for output is not supported. A given data set can be opened for input by multiple user sessions across multiple Oracle servers, or be opened for output by a single user session in one server. An open for output must specify a new (non-existent) data set; opening an existing data set for output is not supported.
When you access a data set with UTL_FILE, the filename argument passed to the FOPEN function is the trailing (right-hand) portion of a data set name. When opening for input, this can include a PDS member name in parentheses. Both the data set name and member name are case-insensitive. No other filespec syntax elements (such as leading slashes or surrounding apostrophes) are permitted in the filename. The filename is appended to the directory object string (with an intervening period and closing apostrophe if necessary) to form the complete filespec to be opened. Only data set name filespecs can be formed with this mechanism: SYSOUT and DD filespecs are not supported by UTL_FILE.
When accessing data sets, only the following UTL_FILE package functions are supported:
FCLOSE
FCLOSE_ALL
FFLUSH (This is accepted for data sets, but has no effect. Buffer flush occurs as needed and when the file is closed.)
FOPEN (Only modes "r" and "w" are accepted.)
GET_LINE
IS_OPEN
NEW_LINE
PUT
PUTF
PUT_LINE
In the following example, a SQL*Plus batch job creates a data set directory object, grants read access to user SCOTT, and then connects as SCOTT to access a member of a PDS from an anonymous PL/SQL block. The PDS member that is accessed is JSMITH.TEST.CNTL(DATA1):
//PLUS     EXEC PGM=SQLPLUS,REGION=0M,PARM='/nolog'
//STEPLIB  DD  DISP=SHR,DSN=ORACLE.V10.CMDLOAD
//ORA$LIB  DD  DISP=SHR,DSN=ORACLE.V10.MESG
//ORA@ORA1 DD  DUMMY
//SYSIN    DD  *
whenever oserror exit 8
whenever sqlerror exit 8
connect dbauser/chuckle
create directory testdir as '//JSMITH.';
grant read on directory testdir to scott;
connect scott/tiger
set serveroutput on size 1000000
declare
   fd   utl_file.file_type;
   buf  varchar2(1024);
begin
   fd := utl_file.fopen('TESTDIR', 'test.cntl(data1)', 'r');
   begin
      loop
         utl_file.get_line(fd, buf, 1024);
         dbms_output.put_line(buf);
      end loop;
   exception
      when no_data_found then
         dbms_output.put_line('---EOF reached.');
      when others then
         dbms_output.put_line('some other read error');
   end;
   utl_file.fclose(fd);
exception
   when others then
      dbms_output.put_line('some fopen error');
end;
/
exit
/*
//
Java provides various mechanisms for accessing arbitrary files, such as the java.io package. When you use Oracle Database Java, your Java procedures are processed on the server. On z/OS, all file accesses from Oracle Database Java procedures are treated as HFS file accesses. Refer to the Oracle Database Java Developer's Guide for more information on using Oracle Database Java.
External LOBs are database large objects whose data resides in an external (non-database) file on the server system. Applications can access LOB data (both internal and external) using the PL/SQL DBMS_LOB package, OCI calls, and certain constructs in precompiler programs. Access to an external (file) LOB uses a BFILE type. The key identifying components of a BFILE are a database directory object name and the external file name, both supplied using the BFILENAME function. Refer to the Oracle Database Application Developer's Guide - Large Objects for more information on external and internal LOBs and BFILE processing.
On z/OS, files accessed as external LOBs must be HFS files. Although directory objects on z/OS are capable of designating a data set name prefix, only directory objects that specify an HFS directory can be used to identify a BFILE.
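The following is a minimal sketch of reading from a BFILE through the DBMS_LOB package. It assumes the APP1DIR directory object created earlier (which maps to an HFS path) and a hypothetical HFS file named report.txt in that directory:

DECLARE
   src_lob  BFILE := BFILENAME('APP1DIR', 'report.txt');
   amount   INTEGER := 100;
   buffer   RAW(100);
BEGIN
   DBMS_LOB.FILEOPEN(src_lob, DBMS_LOB.FILE_READONLY);
   DBMS_LOB.READ(src_lob, amount, 1, buffer);   -- read up to 100 bytes from offset 1
   DBMS_LOB.FILECLOSE(src_lob);
END;
/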
External tables are Oracle database tables that have their row data in an external (non-database) location. Oracle provides two different access drivers to process external tables: the ORACLE_LOADER access driver and the ORACLE_DATAPUMP access driver.
The ORACLE_LOADER access driver functions similarly to SQL*Loader, enabling you to load normal Oracle database tables with data sourced from an external file. The ORACLE_DATAPUMP access driver facilitates moving Oracle table definitions and data to an external file for importing into another Oracle database.
The external tables feature and both drivers are integrated into the database server as extensions to the CREATE TABLE SQL statement. Thus, the external tables feature is readily available through any program that issues Oracle SQL, including SQL*Plus as well as user-written applications. Refer to Oracle Database Utilities for more information on external tables and the associated access drivers.
On z/OS, all files accessed by an Oracle database server as part of the external tables feature must be HFS files. This includes any existing file you plan to use as a row source for an ORACLE_LOADER external table as well as all files written by the feature, including log, bad, and discard files. Like most other server file features, external tables use a database directory object as the locus of security. Although directory objects on Oracle Database for z/OS are capable of designating a data set name prefix, only directory objects that specify an HFS directory can be used with external tables. If you need to load Oracle database tables from a z/OS data set, you must use the regular SQL*Loader utility.
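The following is a minimal sketch of an ORACLE_LOADER external table on z/OS; the table, columns, and file name are illustrative, and the APP1DIR directory object created earlier is assumed to map to an HFS directory containing a comma-delimited file named emp_data.csv:

CREATE TABLE EMP_EXT
    ( EMPNO   NUMBER(4),
      ENAME   VARCHAR2(10) )
    ORGANIZATION EXTERNAL
    ( TYPE ORACLE_LOADER
      DEFAULT DIRECTORY APP1DIR
      ACCESS PARAMETERS
        ( RECORDS DELIMITED BY NEWLINE
          FIELDS TERMINATED BY ',' )
      LOCATION ('emp_data.csv') );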
Both Datapump Export and Import, described in Chapter 4, "Oracle Tools and Utilities on z/OS", and the DBMS_DATAPUMP PL/SQL package are limited to processing HFS files on z/OS. Unique to z/OS, the file name values supplied by Datapump clients to an Oracle database server must begin with a "./" (dot-slash) prefix so they are clearly distinguished as HFS names. In addition, the directory object used in any z/OS Datapump operation must specify a valid HFS directory path.
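The following is a minimal sketch of a schema-level export using the DBMS_DATAPUMP package on z/OS; the APP1DIR directory object and the dump file name are illustrative assumptions. Note the "./" prefix on the file name, which marks it as an HFS name:

DECLARE
   h NUMBER;
BEGIN
   h := DBMS_DATAPUMP.OPEN(operation => 'EXPORT', job_mode => 'SCHEMA');
   -- The "./" prefix is required on z/OS so the name is treated as an HFS file
   DBMS_DATAPUMP.ADD_FILE(handle => h, filename => './scott_exp.dmp',
                          directory => 'APP1DIR');
   DBMS_DATAPUMP.METADATA_FILTER(h, 'SCHEMA_EXPR', 'IN (''SCOTT'')');
   DBMS_DATAPUMP.START_JOB(h);
   DBMS_DATAPUMP.DETACH(h);
END;
/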