Sunday, December 27, 2020

Create/Write Data to IFS file from SQL - IBM i

IFS Stream File

IFS (Integrated File System) stream files are a major part of most IBM i applications. 

There are various ways to create stream files, like CPYTOIMPF, CPYTOSTMF and EDTF.

Creating an IFS file and writing data to it has been made much easier with the SQL procedure IFS_WRITE (IFS_WRITE_BINARY for writing the data in binary format and IFS_WRITE_UTF8 for writing the data in UTF-8 format). 

In this post, we will see how to write data into a stream file directly from SQL using the IFS_WRITE procedure. 

E.g.: 

To start with, all we need is the 'Path Name (IFS Directory & File Name)' and the 'Data to be written'. 

Write data into IFS file from SQL on IBM i

  • The first parameter, 'PATH_NAME', accepts the IFS path name along with the file name.
    • In this example, '/home/REDDYP' is the directory and 'NewTextFile.txt' is the file name. 
    • The directory must already exist by the time the procedure is called. If not, the procedure call fails. 
    • The file doesn't necessarily need to exist. If the file is not present, it is created. If it is already present, data is appended to it by default.
  • The second parameter, 'LINE', accepts the data that is to be written into the file. 
In this example, the file 'NewTextFile.txt' isn't already present, so the file is created once the procedure is run. 
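
A minimal sketch of such a call (using the path and file name from this example):

```sql
-- Create /home/REDDYP/NewTextFile.txt if it doesn't exist and write one line to it
CALL QSYS2/IFS_WRITE(
     PATH_NAME => '/home/REDDYP/NewTextFile.txt',
     LINE      => 'Hello from SQL');
```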

Write data into IFS file from SQL on IBM i

This looks fine when we look at the file from a 5250 session. But if we download the file and open it in a text editor, it appears like the below.

Write data into IFS file from SQL on IBM i

The reason behind this is the CCSID used. By default, IFS_WRITE creates the file with the job's default CCSID. If we need the file to be created with a specific CCSID, we need to pass it using the 'FILE_CCSID' parameter. 

Write data into IFS file from SQL on IBM i

  • The parameter 'FILE_CCSID' accepts an integer value (a CCSID) that is used to create the stream file.
    • It is only used when creating a new stream file and has no effect if the file is already present. 
    • The default value for this parameter is '0' for IFS_WRITE. The file is created with the job's CCSID if '0' is specified (or defaulted).
    • The default value for this parameter is '1208' for IFS_WRITE_UTF8 and '0' for IFS_WRITE_BINARY.
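
For example, to create the file as UTF-8 (CCSID 1208), a sketch of the call would be:

```sql
-- FILE_CCSID only takes effect when the file is being created
CALL QSYS2/IFS_WRITE(
     PATH_NAME  => '/home/REDDYP/NewTextFile.txt',
     LINE       => 'Hello from SQL',
     FILE_CCSID => 1208);
```
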
Write data into IFS file from SQL on IBM i

This looks fine from a 5250 session; let's see how it would look if we download the file. 

Write data into IFS file from SQL on IBM i

So, if we need to create the file with the data (and replace it if the file already exists), we can make use of the 'OVERWRITE' parameter. 

Write data into IFS file from SQL on IBM i

  • The parameter 'OVERWRITE' determines how the data passed (in the LINE parameter) is added to the file. Below are the valid values for this parameter.
    • APPEND - This is the default value for this parameter. Data is appended if the file already exists, and the file is created if it doesn't already exist.
    • REPLACE - Replaces the data in the existing stream file with the data passed. If the file doesn't already exist, it is created. 
    • NONE - This is to be used if a new file always needs to be created. The procedure fails if the file is already present, and a new file is created if it is not.
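
A sketch of a call that replaces the file contents if the file already exists:

```sql
-- REPLACE discards any existing contents before writing the new line
CALL QSYS2/IFS_WRITE(
     PATH_NAME => '/home/REDDYP/NewTextFile.txt',
     LINE      => 'Hello from SQL',
     OVERWRITE => 'REPLACE');
```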

Write data into IFS file from SQL on IBM i

Passing 'APPEND' in this parameter (or not passing the parameter at all) appends the data to the file.

Write data into IFS file from SQL on IBM i

In the above example, a new line has been written with the data passed. If we need to append the data onto the same line rather than creating a new line, we can make use of the 'END_OF_LINE' parameter. However, this parameter needs to be specified while writing the line that the data is to be appended onto (rather than in the call that writes the appended data). 

E.g.: 

If the data needs to be appended to the third record, the END_OF_LINE parameter needs to be passed as 'NONE' while writing that record, so that the following line can be appended to the same record. 

Write data into IFS file from SQL on IBM i

  • The parameter 'END_OF_LINE' determines what is written at the end of the line. Below are the valid values for this parameter.
    • CRLF - This is the default value for IFS_WRITE (and IFS_WRITE_UTF8). A carriage return and line feed are appended to the end of the line.
    • CR - A carriage return is appended to the end of the line.
    • LFCR - A line feed and carriage return are appended to the end of the line. 
    • LF - A line feed is appended to the end of the line.
    • NONE - No end-of-line characters are appended, i.e., data appended later is added to the same record. This is the default value for IFS_WRITE_BINARY.
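
A sketch of writing a record without an end-of-line marker, so that the next write continues on the same record:

```sql
-- No CRLF is written, so the next IFS_WRITE call appends to this same line
CALL QSYS2/IFS_WRITE(
     PATH_NAME   => '/home/REDDYP/NewTextFile.txt',
     LINE        => 'Third Record',
     END_OF_LINE => 'NONE');
```
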
Write data into IFS file from SQL - IBM i

When IFS_WRITE is called the next time, the data is appended to the same line. 

Write data into IFS file from SQL on IBM i

Below is the Syntax of this procedure with all the parameters. 

CALL QSYS2/IFS_WRITE
     (PATH_NAME => 'IFS Path with directory and File Name', 
      LINE => 'Data to be written',
      FILE_CCSID => CCSID(Integer),
      OVERWRITE => 'Overwrite Option',
      END_OF_LINE => 'End of Line Option') 

All the above examples write the data line by line into the file. Let's see how we can write the data from a database table into the stream file using IFS_WRITE. This requires a bit of coding in SQL. 

Write data into IFS file from SQL on IBM i

In the above query, we are calling IFS_WRITE twice: once to create the empty file (or replace its contents with blank if the file is already present), and a second time inside a FOR loop to write each line from the table. Let's see each of the parameters in detail.

  • Line - 4: Parameter 'LINE' with no data is used to create the blank file. The next call to IFS_WRITE is inside the loop, so it is better to create the blank file first and then append the data. 
  • Line - 5: Parameter 'OVERWRITE' with 'REPLACE' replaces any data in the file if the file is already present. 
  • Line - 6: Parameter 'END_OF_LINE' with 'NONE' appends the data starting from the first line.
  • Line - 9: FOR loop with a SELECT query over the table 'REDDYP1/IFSFILE'.
  • Line - 10: IFS_WRITE with the data to be written. This procedure is called for each row in the table, and the data is added as a new row (with the default value of OVERWRITE).
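
A sketch of such a compound statement, assuming the table REDDYP1/IFSFILE has a character column (called LINE_DATA here for illustration):

```sql
BEGIN
  -- Create an empty file, or clear it if it already exists
  CALL QSYS2/IFS_WRITE(
       PATH_NAME   => '/home/REDDYP/NewTextFile.txt',
       LINE        => '',
       OVERWRITE   => 'REPLACE',
       END_OF_LINE => 'NONE');

  -- Append one line to the stream file for each row in the table
  FOR EACHROW AS SELECT LINE_DATA FROM REDDYP1/IFSFILE DO
    CALL QSYS2/IFS_WRITE(
         PATH_NAME => '/home/REDDYP/NewTextFile.txt',
         LINE      => LINE_DATA);
  END FOR;
END;
```
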

IFS_WRITE_BINARY and IFS_WRITE_UTF8 work in the same way; the only difference is that the data is passed in binary and UTF-8 formats respectively. 


If you have any suggestions or feedback, please leave a comment below or use the Contact Form.

Saturday, December 26, 2020

Read data in IFS file from SQL - IBM i

IFS Stream File


IFS (Integrated File System) stream files are a major part of most IBM i applications, so having to read data from stream files becomes necessary. 

We can read an IFS file from programs in two different ways. 
  • By copying the data from the IFS file to a database file (CPYFRMSTMF or CPYFRMIMPF).
  • By using the 'open' & 'read' procedures written in 'C'. Click Here to see more about how to read an IFS file using these procedures from RPGLE.

One other way to do this is by using the SQL table function 'IFS_READ'. There are a couple more table functions, 'IFS_READ_BINARY' and 'IFS_READ_UTF8', which read the data and return it in binary and UTF-8 formats respectively. 

In this post, we will see how to read an IFS file using 'IFS_READ', with a little about the other two table functions towards the end.

E.g.: 

Read IFS file from SQL

The above query is simple and straightforward; all we need to do is pass the required file name (along with the full path) in the parameter 'PATH_NAME'.

Read IFS file from SQL

This table function returns only two columns. 
  • 'LINE_NUMBER' - Indicates the position of the line in the file.
  • 'LINE' - The actual data in the line.
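
A sketch of the basic query (path from the earlier examples):

```sql
-- Each line of the stream file is returned as one row
SELECT LINE_NUMBER, LINE
  FROM TABLE(QSYS2/IFS_READ(
             PATH_NAME => '/home/REDDYP/NewTextFile.txt'));
```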

If there is a limit on the number of characters to receive per line, we can control this using the MAXIMUM_LINE_LENGTH parameter. 

Read IFS file from SQL

  • The parameter 'MAXIMUM_LINE_LENGTH' accepts a numeric value.
Read IFS file from SQL

If we look at the above result, each row contains only 12 digits, and any additional characters have been returned as new lines. 

Lines 1, 4 & 7 have a length of 12 digits and have each returned only one row. Lines 2 & 6 exceed 12 digits, so only 12 digits are returned and the rest are returned as new lines (3 & 6).

There is another important parameter, 'END_OF_LINE', which is used to determine the end of a line in the stream file. Let's have a look at how it makes a difference before we see the list of valid values. 

Read IFS file in SQL

In the above query, 'NONE' is passed for 'END_OF_LINE', so the query will not consider any end-of-line characters. 

Read IFS file in SQL

All the records have been returned in a single row. The number of characters allowed in a single line can be controlled using 'MAXIMUM_LINE_LENGTH'. If nothing is specified, by default up to 2GB is returned in a single line and anything beyond that is returned in a new line. 

Below are the possible options for the END_OF_LINE parameter.
  • CRLF - A carriage return and line feed indicate end of line.
  • LFCR - A line feed and carriage return indicate end of line.
  • CR - A carriage return indicates end of line.
  • LF - A line feed indicates end of line.
  • ANY - Any of the above four indicate end of line (default for IFS_READ & IFS_READ_UTF8).
  • NONE - No end-of-line characters are considered (default and the only allowed value for IFS_READ_BINARY). 
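
A sketch combining both parameters, ignoring end-of-line characters and splitting the contents into 12-character rows:

```sql
-- The whole file is treated as one logical line, returned in 12-character chunks
SELECT LINE_NUMBER, LINE
  FROM TABLE(QSYS2/IFS_READ(
             PATH_NAME           => '/home/REDDYP/NewTextFile.txt',
             MAXIMUM_LINE_LENGTH => 12,
             END_OF_LINE         => 'NONE'));
```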

Let's see a simple example of how to get the data in Binary format using IFS_READ_BINARY. 

Read IFS file in Binary mode from SQL

The parameters are the same as IFS_READ, except 'END_OF_LINE', which only accepts 'NONE'. This implies that IFS_READ_BINARY always returns a single row (MAXIMUM_LINE_LENGTH restrictions apply).
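
A sketch of the binary read:

```sql
-- Returns the entire file contents as a single binary value in one row
SELECT LINE
  FROM TABLE(QSYS2/IFS_READ_BINARY(
             PATH_NAME => '/home/REDDYP/NewTextFile.txt'));
```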

Read IFS file from SQL in Binary format

In the same way, data can be retrieved in UTF8 format using IFS_READ_UTF8. 

Below is the syntax of IFS_READ with all the parameters.

SELECT * FROM
         TABLE(QSYS2/IFS_READ(
               PATH_NAME => 'ifs_path_name',
               MAXIMUM_LINE_LENGTH => maximum_length(in decimal),
               END_OF_LINE => 'end_of_line_characters'))

All of the examples shared above return valid data. So, what happens if an incorrect path is passed? No data is received and the query completes with SQL code '100'. 



Monday, December 21, 2020

Retrieve Objects present in IFS Directory from SQL - IBM i

Objects in IFS Directory


Working with the IFS (Integrated File System) is essential and almost part of the day-to-day work for most IBM i developers.  

In my experience, below are a couple of tasks I had to do frequently. 

  • Check if an object is present in IFS directory. 
  • Retrieve list of objects present in IFS directory. 

These tasks (and much more) have now been made easier to do directly from SQL with the 'IFS_OBJECT_STATISTICS' table function. 

What does the IFS_OBJECT_STATISTICS table function return? It returns the attributes of every IFS object (including directories) present in the IFS path provided. 

Let's now have a look at how to retrieve the objects present for a specified path. 

Retrieve Objects present in IFS directory

In the above query,
  • Line - 1: Column 'PATH_NAME' returns the IFS path (or object) the row is returned for. 
  • Line - 2: Column 'OBJECT_TYPE' returns the type of the object retrieved. 
  • Line - 3: Column 'CREATE_TIMESTAMP' returns the timestamp when the object was created. 
  • Line - 5: 'IFS_OBJECT_STATISTICS' is the table function that retrieves the data. 
  • Line - 6: Parameter 'START_PATH_NAME' is used to pass the IFS directory or object name. 
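
A sketch of the query described above:

```sql
-- One row is returned for the start path and for every object under it
SELECT PATH_NAME,
       OBJECT_TYPE,
       CREATE_TIMESTAMP
  FROM TABLE(QSYS2/IFS_OBJECT_STATISTICS(
             START_PATH_NAME => '/home/REDDYP'));
```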

Retrieve Objects present in IFS directory

Looking at the above result, 
  • PATH_NAME contains,
    • The path passed in the parameter, '/home/REDDYP'.
    • All stream files under the directory '/home/REDDYP' (accessibmi.py, Sample1.csv etc).
    • Sub-directories under the path passed (/home/REDDYP/SubDirectory).
    • Files present in the sub-directory (/home/REDDYP/SubDirectory/Sample_Textfile.txt). 
  • OBJECT_TYPE contains the type of the object returned, i.e., *DIR (directory), *STMF (stream file) etc. 
  • CREATE_TIMESTAMP contains the timestamp the object or directory was created on.
  • There are many other columns IFS_OBJECT_STATISTICS returns. You can explore the full list of columns by running the query with '*' instead of column names. 

The above query returned sub-directories and the objects inside them. Seeing all the objects in the sub-directories is helpful as long as there aren't many sub-directories and objects inside them. 

With the 'SUBTREE_DIRECTORIES' parameter, we can control whether or not the objects of sub-directories are listed. This becomes helpful if there are many objects and sub-directories. 

Retrieve Objects present in IFS Directory

In the above query, the additional parameter SUBTREE_DIRECTORIES accepts two values.
  • YES (default) - Retrieves all the objects inside sub-directories.
  • NO - Does not retrieve the objects inside sub-directories (any sub-directories directly under the specified path are still listed).
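
A sketch listing only the objects directly under the path:

```sql
-- Sub-directories are still listed, but their contents are not
SELECT PATH_NAME, OBJECT_TYPE
  FROM TABLE(QSYS2/IFS_OBJECT_STATISTICS(
             START_PATH_NAME     => '/home/REDDYP',
             SUBTREE_DIRECTORIES => 'NO'));
```
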
Retrieve Objects present in IFS Directory

Another important parameter I would like to mention is 'OBJECT_TYPE_LIST'. This parameter can be used to verify or retrieve IFS objects of a specific type. 

In the above example, we can see there are two different object types, *DIR and *STMF. If I'm only interested in *STMF and not the directories, then this parameter is very helpful. 

Retrieve Objects present in IFS Directory

The above query would only return objects of type '*STMF'. So, what if we need to select multiple types? All the required object types can be specified separated by spaces.

OBJECT_TYPE_LIST => '*STMF *DIR'

This parameter also accepts special values that indicate groups of object types, and these can be mixed with specific object types. 
  • *ALLDIR - All directory types (*LIB, *DIR, *FLR, *FILE and *DDIR)
  • *ALLSTMF - All stream file types (*MBR, *DOC, *STMF, *DSTMF and *USRSPC)
  • *MBR - All database file member types.
  • *NOQDLS - All object types except QDLS object types.
  • *NOQOPT - All object types except QOPT and QNTC object types.
  • *NOQSYS - All object types except QSYS.LIB object types.
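
A sketch using one of the special values:

```sql
-- *ALLSTMF covers all stream file types (*MBR, *DOC, *STMF, *DSTMF and *USRSPC)
SELECT PATH_NAME, OBJECT_TYPE
  FROM TABLE(QSYS2/IFS_OBJECT_STATISTICS(
             START_PATH_NAME  => '/home/REDDYP',
             OBJECT_TYPE_LIST => '*ALLSTMF'));
```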

There are a couple more parameters; below is a brief description of each.

OMIT_LIST

A list of path names to exclude from processing. All objects and sub-directories under these paths are excluded as well. Multiple path names can be specified separated by blanks. There are a couple of exceptions to this.
  • If spaces exist in the path name, the path name needs to be enclosed in apostrophes or quotes. 
    OMIT_LIST => '/home/REDDYP/Sub Directory'
  • If there is an apostrophe or quote in the path name, the embedded character needs to be doubled. 
       OMIT_LIST => '"/home/REDDYP/Member''s Data" "/home/REDDYP/Sub Directory"'

       In the above example,
    • The first path '/home/REDDYP/Member's Data' has an apostrophe, so it needs to be doubled (Member''s Data).
    • Both paths are enclosed in quotes. Using quotes is the best practice when dealing with multiple paths; it is easy to understand and maintain.

IGNORE_ERRORS

This is more about error handling, advising the system on what to do in case of an error while executing the query. 
  • NO - An error is returned.
  • YES (default) - A warning is returned and no data is returned when an error occurs. 

The query looks like below if we use all the parameters.

SELECT * FROM TABLE(IFS_OBJECT_STATISTICS(
                    START_PATH_NAME => 'Start Path Name',
                    SUBTREE_DIRECTORIES => 'YES/NO',
                    OBJECT_TYPE_LIST => 'List of Object Types',
                    OMIT_LIST => 'Paths to be Omitted',
                    IGNORE_ERRORS => 'YES/NO'))

There is one thing I haven't mentioned: how to check whether an object is present in an IFS directory. Well, this isn't the actual purpose of the function, but we can achieve it with a query like the below.

Check if an object is present in IFS directory

This query returns '1' if the object (/home/REDDYP/hello.py) is present and returns '0' if not present. 
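
A sketch of such an existence check (relying on IGNORE_ERRORS defaulting to 'YES', so a missing path simply returns no rows):

```sql
-- Returns 1 if the object exists, 0 if it doesn't
SELECT COUNT(*) AS OBJECT_EXISTS
  FROM TABLE(QSYS2/IFS_OBJECT_STATISTICS(
             START_PATH_NAME => '/home/REDDYP/hello.py'));
```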



Wednesday, December 16, 2020

Working with Data Queues in SQL - IBM i

Data Queues

Data Queues (DTAQ) have been very useful for communication between different jobs on IBM i.

There are various APIs to help work with data queues by calling them from CL or RPGLE programs. 

With SQL procedures and functions, it has become easy to work with data queues from SQL without having to write a program to call the APIs. 

We can now create data queues, send data to a data queue, receive data, and more. Let's have a look.

  • Create Data Queue.
  • Retrieve Data Queue info.
  • Send data to Data Queue. 
  • Receive data from Data Queue.
  • Retrieve data from Data Queue.
  • Clear Data Queue.

Create Data Queue:

Create Data Queue (CRTDTAQ) is more of an interactive command and is best suited to run from either the command line or a CL program. 

Sometimes we need to create data queues at run time. QCMDEXC is extremely useful in such cases: the QCMDEXC API is useful for running commands from programs, and the QCMDEXC procedure is useful for running commands from SQL. 

A data queue can be created from SQL at run time as below. 
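
A sketch of such a call (the queue name, library and length here are just examples):

```sql
-- Run the CRTDTAQ CL command through the QCMDEXC SQL procedure
CALL QSYS2/QCMDEXC('CRTDTAQ DTAQ(REDDYP1/TESTDTAQ) MAXLEN(100)');
```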

Run CL commands from SQL

Or, even from a single SQL statement. 

Run CL commands in SQL

Read more about running CL commands in SQL

Retrieve Data Queue info:

Checking the attributes of a data queue becomes essential when investigating problems around data queues, or if we just need to know all the data queues present in a specific library, and more. 

Whatever the requirement, the 'DATA_QUEUE_INFO' view comes in handy to retrieve the info quickly through a single SQL statement. 


Send data to Data Queue:

Data can be sent to a data queue with a single SQL statement, without having to write code to call the API (QSNDDTAQ). 

And data can be sent to a data queue in different formats (character, UTF-8 and binary). There is a separate procedure for sending the data in each format. 

The 'SEND_DATA_QUEUE' procedure can be used to send the data to a data queue in character format. 

Send data to Data Queue in SQL

Read more about Sending data to Data Queues in different formats in SQL. 

Receive data from Data Queue: 

Receiving data from a data queue using the 'RECEIVE_DATA_QUEUE' table function returns the data in different formats (character, UTF-8 and binary). 

This gives the programmer the choice to get the data directly in the required format; simply choose the column you need. 

One other advantage of using this function is that data can be received without having to delete it from the queue. 

Receive data from Data Queue

Read more about Receiving data from Data Queues and how to Receive without removing the data from the Queue.

Retrieve data from Data Queue:

Checking what data is present in a data queue becomes essential when investigating problems around data queues. Or we may just need to retrieve the data from a data queue without removing it from the queue. 

'DATA_QUEUE_ENTRIES' becomes extremely useful in these cases. Like 'RECEIVE_DATA_QUEUE', this function returns the data in different formats. 

Retrieve data from Data Queue

Read more about Retrieve data from Data Queue and different selection criteria we can apply. 

Clear Data Queue:

Clearing data from a data queue in SQL has been made easy with the 'CLEAR_DATA_QUEUE' procedure. 

Clear data from Data Queue

Read more about Clearing data from Data Queue and how we can clear a set of data. 



Sunday, December 13, 2020

Retrieve Data Queue Info from SQL - IBM i

Retrieve Data Queue Info

Data Queues (DTAQ) have been very useful for communication between different jobs on IBM i. 

Sometimes it becomes essential to see what parameters a data queue was created with. The Retrieve Data Queue Description (QMHQRDQD) API has been very useful for retrieving this information. 

We can now retrieve this information directly from SQL with a query over 'DATA_QUEUE_INFO' (system name: DTAQ_INFO). This retrieves the data queue attributes, not the data in the data queue. Have a look at Retrieve data from Data Queue from SQL if you are interested in retrieving the data.

This makes it easier to retrieve the data queue attributes; it is just like running a SELECT query on a table. 

This helps us with most of the common questions around data queue attributes, like:
  • How many messages are there in the data queue? Use column CURRENT_MESSAGES.
  • Is the data queue keyed, or what is its sequence? Use column SEQUENCE.
  • What is the length of the key data? Use column KEY_LENGTH.
These are a few queries I usually had to check; have a look at the fields present in DTAQ_INFO for the full list. 

Let's have a look at the sample query.

DATA_QUEUE_INFO

This would list all the data queues in library 'REDDYP1' with the selected columns.
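
A sketch of such a query (CURRENT_MESSAGES, SEQUENCE and KEY_LENGTH are the columns mentioned above; DATA_QUEUE_NAME and DATA_QUEUE_LIBRARY identify the queue):

```sql
SELECT DATA_QUEUE_NAME, SEQUENCE, CURRENT_MESSAGES, KEY_LENGTH
  FROM QSYS2/DATA_QUEUE_INFO
 WHERE DATA_QUEUE_LIBRARY = 'REDDYP1';
```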

DATA_QUEUE_INFO

A couple of points to note here.
  • One row is returned for each data queue.
  • The user should have sufficient authority to the data queue. 


Tuesday, December 8, 2020

Receive Data from Data Queue in SQL - IBM i

Receive Data from Data Queue

Data Queues (DTAQ) have been very useful for communication between different jobs on IBM i. 

There are multiple APIs to work with data queues. These have been very helpful for sending and/or receiving data from data queues. 

Working with data queues has been made easier with SQL procedures/functions. In this post, we will see how to receive data from a data queue using the SQL table function 'RECEIVE_DATA_QUEUE'. It works similarly to the 'QRCVDTAQ' API.

Let's see how to receive data from a standard data queue with no key data. 

RECEIVE_DATA_QUEUE

In the above SQL, 

  • Line - 1: Column 'MESSAGE_DATA' receives the data from the data queue in character format. 
  • Line - 2: Column 'MESSAGE_DATA_UTF8' receives the data from the data queue in UTF-8 format.
  • Line - 3: Column 'MESSAGE_DATA_BINARY' receives the data from the data queue in binary format. 
  • Line - 5: Parameter 'DATA_QUEUE', the name of the data queue to receive the data from.
  • Line - 6: Parameter 'DATA_QUEUE_LIBRARY', the name of the library the data queue is present in. *LIBL and *CURLIB can be used instead, based on where the data queue is present.
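
A sketch of such a query (the queue name and library here are just examples):

```sql
-- Receives one message; by default it is also removed from the queue
SELECT MESSAGE_DATA,
       MESSAGE_DATA_UTF8,
       MESSAGE_DATA_BINARY
  FROM TABLE(QSYS2/RECEIVE_DATA_QUEUE(
             DATA_QUEUE         => 'TESTDTAQ',
             DATA_QUEUE_LIBRARY => 'REDDYP1'));
```
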

We don't need to explicitly mention the parameter names while running the SQL statement; the parameters can be passed positionally, similar to a usual CALL statement.

RECEIVE_DATA_QUEUE

The data is received as below, and the message is cleared from the data queue. 

RECEIVE_DATA_QUEUE

The above query removes the message from the data queue as soon as the data is received. There is another parameter where we can specify whether or not the message needs to be removed from the data queue. 

RECEIVE_DATA_QUEUE

  • Line - 7: Parameter 'REMOVE' can be used to specify whether the message needs to be removed. 'NO' does not remove the message from the DTAQ and 'YES' removes the message from the DTAQ.

For a keyed data queue, key data and key order are mandatory parameters. 

RECEIVE_DATA_QUEUE

In the above SQL, 
  • Line - 2: Column 'KEY_DATA' receives the key data of the message received from the data queue. 
  • Line - 7: Parameter 'KEY_DATA', the key data of the message that needs to be received from the data queue. 
  • Line - 8: Parameter 'KEY_ORDER', the comparison criteria to be specified. The procedure compares the key data passed with the key data in the message. Below are the possible values for this parameter.
  • EQ - All messages with key equal to the key data are to be returned.
  • GE - All messages with key greater than or equal to the key data are to be returned.
  • GT - All messages with key greater than the key data are to be returned.
  • LE - All messages with key less than or equal to the key data are to be returned.
  • LT - All messages with key less than the key data are to be returned.
  • NE - All messages with key not equal to the key data are to be returned.
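
A sketch of a keyed receive that leaves the message on the queue (the queue name and key value here are just examples):

```sql
-- Receive the first message whose key is greater than or equal to '01',
-- without removing it from the queue
SELECT MESSAGE_DATA, KEY_DATA
  FROM TABLE(QSYS2/RECEIVE_DATA_QUEUE(
             DATA_QUEUE         => 'TESTKDTAQ',
             DATA_QUEUE_LIBRARY => 'REDDYP1',
             REMOVE             => 'NO',
             KEY_DATA           => '01',
             KEY_ORDER          => 'GE'));
```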

One other important parameter to consider is the wait time. This is applicable for both keyed and non-keyed data queues.

RECEIVE_DATA_QUEUE

  • Line - 9: Parameter 'WAIT_TIME' is used to specify how long to wait if there is no message to receive from the data queue. Valid values are from -99,999 to 99,999.
    • Any negative value makes the procedure wait forever for a message to arrive.
    • Zero continues processing immediately.
    • Any positive value makes the procedure wait for the number of seconds passed in this parameter. 



Friday, December 4, 2020

Send Data to Data Queue from SQL - IBM i

Send Data to Data Queue:

Data Queues (DTAQ) have been very useful for communication between different jobs on IBM i. 

There are multiple APIs to work with data queues. These have been very helpful for sending and/or receiving data from data queues. 

Working with data queues has been made easier with SQL procedures. In this post, we will see how to send data to data queues using SQL procedures. 

There are three different procedures to do this. 
  • SEND_DATA_QUEUE (input data is sent to the DTAQ in character format).
  • SEND_DATA_QUEUE_BINARY (input data is sent to the DTAQ in binary format).
  • SEND_DATA_QUEUE_UTF8 (input data is sent to the DTAQ in UTF-8 format). 

We will mainly focus on 'SEND_DATA_QUEUE' in this post and look at the other two procedures towards the end. 

Let's see how to send a message to a standard data queue with no key. 

SEND_DATA_QUEUE

In the above call to the procedure, we are passing three parameters. 
  • MESSAGE_DATA - Message to be sent to the data queue.
  • DATA_QUEUE - Data queue name.
  • DATA_QUEUE_LIBRARY - Library in which the data queue is present. *LIBL and *CURLIB can also be used depending on where the data queue is present.
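
A sketch of such a call (the queue name and library here are just examples):

```sql
CALL QSYS2/SEND_DATA_QUEUE(
     MESSAGE_DATA       => 'Hello from SQL',
     DATA_QUEUE         => 'TESTDTAQ',
     DATA_QUEUE_LIBRARY => 'REDDYP1');
```
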
Notice that in the above CALL statement we are using the parameter name followed by the parameter value. This doesn't necessarily have to be the case; the parameters can be passed positionally, similar to a regular CALL statement.

SEND_DATA_QUEUE

Both of these work the same way. Below are the messages currently present in the data queue (Click Here to see how to retrieve data from a data queue in SQL). 

DATA_QUEUE_ENTRIES

Let's look at sending data to a keyed data queue. This requires an additional parameter to be passed for the key data.

SEND_DATA_QUEUE

  • KEY_DATA - Key data to be passed. The length of the key data should match the length specified while creating the data queue. 
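
A sketch of a keyed send (the queue name and key value here are just examples):

```sql
CALL QSYS2/SEND_DATA_QUEUE(
     MESSAGE_DATA       => 'Keyed message',
     DATA_QUEUE         => 'TESTKDTAQ',
     DATA_QUEUE_LIBRARY => 'REDDYP1',
     KEY_DATA           => '01');
```
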

Let's now look at how the data is stored differently when we send it using 'SEND_DATA_QUEUE_UTF8' and 'SEND_DATA_QUEUE_BINARY'. 

BINARY:

SEND_DATA_QUEUE_BINARY


The parameters passed are similar to 'SEND_DATA_QUEUE', except the message data is in binary format. 

DATA_QUEUE_ENTRIES_BINARY

Similarly, with SEND_DATA_QUEUE_UTF8 we can send the message in UTF-8 format, with all other parameters the same as above. 

In the next post, we will see how to receive data from a data queue in SQL, and a bit more detail on how the binary/UTF-8 formatted data is received. 


Tuesday, December 1, 2020

Clear Data Queue from SQL - IBM i

Clear Data Queue:


Data Queues (DTAQ) have been very useful for communication between different jobs on IBM i. 

There are multiple APIs to work with data queues. In the last post, we saw how to retrieve data from a data queue using SQL.

In this post, we will see how to clear data from a data queue in SQL using the 'CLEAR_DATA_QUEUE' procedure in QSYS2, with some examples. It works similarly to the 'QCLRDTAQ' API. 

Let's look at an example of clearing all the entries from a standard data queue. 

E.g.: 

CLEAR_DATA_QUEUE

In the above statement, we are passing two parameters.
  • 1st parameter: Data queue - Name of the data queue that needs to be cleared. 
  • 2nd parameter: Data queue library - Name of the library the data queue is present in. This parameter accepts special values like '*LIBL' or '*CURLIB'. 
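
A sketch of such a call (the queue name and library here are just examples):

```sql
-- Removes every message from the data queue
CALL QSYS2/CLEAR_DATA_QUEUE(
     DATA_QUEUE         => 'TESTDTAQ',
     DATA_QUEUE_LIBRARY => 'REDDYP1');
```
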

Let's look at another example with a keyed data queue, clearing only a set of messages based on the key. 

E.g.: 

Before we actually clear the data queue, below are the messages present in it. 

DATA_QUEUE_ENTRIES

Now let's clear the messages in the data queue with key data greater than '02'. This should clear the messages with key data '03' and '04'. 

CLEAR_DATA_QUEUE

In the above statement, we are passing two additional parameters to control which data we clear based on the key data. These parameters can only be specified for a keyed data queue.
  • 3rd parameter: Key data - String containing the key value. This should be the same length as defined while creating the data queue.
  • 4th parameter: Key order - Comparison criteria to determine which messages are to be cleared from the data queue. Below are the possible values for this parameter.
  • EQ - All messages with key equal to the key data are cleared.
  • GE - All messages with key greater than or equal to the key data are cleared.
  • GT - All messages with key greater than the key data are cleared.
  • LE - All messages with key less than or equal to the key data are cleared.
  • LT - All messages with key less than the key data are cleared.
  • NE - All messages with key not equal to the key data are cleared.
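
A sketch of the keyed clear described above (the queue name here is just an example):

```sql
-- Clears only the messages with key data greater than '02'
CALL QSYS2/CLEAR_DATA_QUEUE(
     DATA_QUEUE         => 'TESTKDTAQ',
     DATA_QUEUE_LIBRARY => 'REDDYP1',
     KEY_DATA           => '02',
     KEY_ORDER          => 'GT');
```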

This has cleared the messages from the data queue with key data greater than '02'. 

DATA_QUEUE_ENTRIES

This procedure can be called from interactive SQL, Run SQL Scripts, other SQL procedures, or embedded SQL in RPG programs. 

