Gary's Oracle On Tap
A blog about Oracle technology, and the occasional mention of homebrew on tap.
Wednesday, April 10, 2024
Putting the You In User Groups: UTOUG Training Days 2024
Tuesday, January 24, 2023
Running Oracle database containers on Apple M1 using PODMAN
INTRODUCTION
In this blog we will talk about running Oracle Database in a PODMAN container on Apple silicon (M1 / M2) under macOS. As you might know, Oracle does not provide native Oracle Database binaries for Apple silicon, which is a different architecture from Intel / AMD x86. If you previously ran Linux containers under Docker on an x86 Intel-based Mac, you will find a very different situation on a newer Apple M1 or M2 processor.
Docker for macOS is still available, and it should be able to emulate x86 automatically. I personally had some issues getting Oracle's pre-built database containers to work under Docker, and instead decided to look into using PODMAN.
I should mention that I still find PODMAN somewhat buggy. It's not clear to me whether this is due to the QEMU emulation of x86 on Apple silicon, PODMAN itself, my specific setup, or some combination. So be warned: a number of times I start the Oracle database container and it fails. I then shut down my PODMAN machine, restart it, and the container works again. Generally, once the container is running it has been solid.
Also, since x86 is emulated on Apple silicon, it runs much slower than what I was used to on my previous Intel-based MacBook using Docker. In my anecdotal tests it runs about 70% slower. Note this is not a knock on Apple silicon; it is just the nature of the full processor emulation that is required.
REQUIREMENTS
PODMAN has a slightly different architecture than Docker and is an open source product. This means that installing and using it involves setting up a number of other open source items. I won't cover all of them in these directions; you can find many links on how to install each. You will need:
- Homebrew (https://brew.sh) or some other method to install and run open source software
- QEMU (https://www.qemu.org) Install this through Homebrew; this is the open source emulation software that will emulate the x86_64 (AMD64) processor. Note that you will see this referred to throughout the open source world as AMD64, since AMD designed the original 64-bit extension of the x86 instruction set that Intel also uses today.
- Podman (https://podman.io) the container platform; you can install this using Homebrew, or you can download the packages and install them directly
- Podman Desktop (optional) (https://podman-desktop.io) a useful GUI for working with Podman, though you will still need to do some of the work on the command line.
AMD64 MACHINE
The next step is to create a machine (VM) for PODMAN based on the amd64 processor. I borrowed these steps from a great IBM article; I'll paraphrase and reuse some of their commands, but the source article is here:
https://developer.ibm.com/tutorials/running-x86-64-containers-mac-silicon-m1/
Note: when you install PODMAN, you will probably already have an ARM-based M1 machine (VM) created. This default machine cannot run the Oracle-provided database containers.
The machine is a VM that hosts your containers in PODMAN. This is the main difference from Docker, which runs an OS daemon, while PODMAN uses a VM that is technically outside the OS it is running on.
For the purpose of running Oracle Database, we will have PODMAN run a Linux amd64 VM under QEMU emulation, and our containers will then run inside that VM (machine). Open a terminal prompt (without Rosetta) and execute the following commands:
- Verify PODMAN is version 4.1.1 or newer:
podman --version
- Make sure you have stopped the current machine in PODMAN:
podman machine stop
- Verify your QEMU version is 7.0 or higher:
qemu-system-x86_64 --version
- Create a new PODMAN machine using the latest Fedora CoreOS image:
podman machine init --image-path https://builds.coreos.fedoraproject.org/prod/streams/stable/builds/37.20221225.3.0/x86_64/fedora-coreos-37.20221225.3.0-qemu.x86_64.qcow2.xz intel
Note: you can update the link by navigating to:
https://getfedora.org/en/coreos/download?tab=metal_virtualized&stream=stable&arch=x86_64
and grabbing the URL for DOWNLOAD from “Virtualized -> QEMU”
Note: "intel" will be the name of your PODMAN machine; you can change it to whatever is useful for you.
Before starting this machine, we need to reconfigure it for the proper settings. This is critical, and I have made some suggestions to help with performance.
Edit the configuration file in your favorite text editor:
~/.config/containers/podman/machine/qemu/intel.json
Note: if you changed the machine name, then replace intel.json with the name your_name.json
Note: this is a JSON file, so watch your syntax closely; we will be updating JSON arrays and objects. A quick JSON primer: https://guide.couchdb.org/draft/json.html
First in the “CmdLine” section, change the command that will be run:
"/opt/homebrew/bin/qemu-system-aarch64",
Change to:
"/opt/homebrew/bin/qemu-system-x86_64",
Also in the “CmdLine” section you will have to remove the following options, which are for the ARM (aarch64) machine:
"-accel",
"hvf",
"-accel",
"tgc",
"-cpu",
"host",
"-M",
"virt,highmem=on",
"-drive",
"file=/Users//.local/share/containers/podman/machine/qemu/intel_ovmf_vars.fd,if=pflash,format=raw"
Note: the -cpu value may be something besides host; if it is, remove that as well.
Add the following command line options in the spot you just removed:
"-cpu",
"qemu64",
"-accel",
"tcg,thread=multi",
"-m",
"2048",
"-smp",
"4",
This is basically saying these are the first command line options to the qemu-system-x86_64 command.
Note: I have set the memory (-m) to 2GB and the CPU core (-smp) count to 4. You can adjust these settings for your needs.
Next change the line for the VM firmware:
"file=/opt/homebrew/share/qemu/edk2-aarch64-code.fd,if=pflash,format=raw,readonly=on",
to
"file=/opt/homebrew/share/qemu/edk2-x86_64-code.fd,if=pflash,format=raw,readonly=on",
This is setting the correct firmware for the machine (VM).
Finally, at the bottom of the configuration file is an object section that is used to display the machine information in the GUI. Update the CPUs and Memory settings appropriately:
"CPUs": 4,
"Memory": 2048,
Your machine file should now be set up correctly. Next we need to launch the machine and make sure PODMAN is healthy. From the terminal run:
podman machine start intel
Note: this will take a few minutes to start.
Verify the machine is running and healthy:
podman machine list
NAME VM TYPE CREATED LAST UP CPUS MEMORY DISK SIZE
intel* qemu 4 weeks ago Currently running 4 2.147GB 107.4GB
If you are only going to use this machine for your containers, you can set it to be the default by running from the terminal:
podman system connection default intel
Note: you need to restart the terminal for this to take effect.
Also, you can change back to the original ARM machine with:
podman system connection default podman-machine-default
Setting up your Oracle Database Container
I had some additional issues here. As I noted, the emulated amd64 machine is not very fast, and after I increased the SMP count to 4 I ran into another issue with the Java-based Oracle setup utilities that are executed during container creation. If you try to load your Oracle Database container and it hangs, adjust your SMP count down to 1 (one) and restart your machine. Once the container is loaded and usable, you can shut down the machine and set the SMP count back to 4 (four) or whatever you had. Starting the container after this is fine.
I could verify the hang by connecting to the OS of the container:
podman exec -it DB213 sh
I saw the java process running oracle.assistants.dbca.driver.DBConfigurator, and the log file showed no progress happening (under /opt/oracle/cfgtoollogs/dbca/ORCLCDB).
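One way to spot that symptom is to check whether the dbca log is still growing. A hypothetical sketch of the check, using a temporary file so it runs anywhere; in the container you would point LOG at the actual log under /opt/oracle/cfgtoollogs/dbca/ORCLCDB:

```shell
# Sample the log size twice; an unchanged size across the interval
# suggests the DBConfigurator hang described above.
LOG=$(mktemp)                      # stand-in for the real dbca log file
echo "step 1 complete" > "$LOG"
SIZE1=$(wc -c < "$LOG" | tr -d ' ')
sleep 2                            # use a much longer interval for a real check
SIZE2=$(wc -c < "$LOG" | tr -d ' ')
if [ "$SIZE1" -eq "$SIZE2" ]; then
  echo "no progress in $LOG"
fi
```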
Steps to get an Oracle Database Container
- Log in to the website (container-registry.oracle.com) and accept the license. Sign in with the button in the upper left corner, click on the repository (Database), then accept the license.
- From the terminal, log in to the same repository with PODMAN (use the same login you accepted the license with):
podman login container-registry.oracle.com
- Pull the container of your choice into PODMAN (use the links on the website to select the specific version you want):
podman pull container-registry.oracle.com/database/enterprise:21.3.0.0
- List the installed images; you may want to note the image ID if you want a specific version:
podman image list
- Now build a container from the image (using the ID; you can also use the REPOSITORY name):
podman run -d --name DB213 container-registry.oracle.com/database/enterprise
Note: DB213 is my container name; use whatever name works for you.
Note: please see my preface on the SMP CPU count if you are having issues with your container not creating an Oracle database properly.
To get a shell on the container:
podman exec -it DB213 sh
Additional tip
I was having some issues with the timezone in my demo database. Containers will use UTC by default. If you want to change this, you can set the parameter tz in the file:
$HOME/.config/containers/containers.conf
[containers]
# Set time zone in container. Takes IANA time zones as well as "local",
# which sets the time zone in the container to match the host machine.
#
tz = "local"
Friday, February 5, 2021
Finding Your SQL Trace File in Oracle Database
Sorry for the delay, but here are my suggestions. First you will need to know where your trace file will be located. By default it will be in the trace directory as part of the diagnostic destination. Make sure you know where your diagnostic destination is located.
SQL> show parameter diag

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
diagnostic_dest                      string      /u01/app/oracle
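From that setting, the trace directory follows the pattern <diagnostic_dest>/diag/rdbms/<db_name>/<instance_name>/trace. A small shell sketch of the path construction, using the t1db database from the examples in this post:

```shell
# Build the session trace directory path from diagnostic_dest.
DIAG_DEST=/u01/app/oracle
DB_NAME=t1db
INSTANCE_NAME=t1db
TRACE_PATH="$DIAG_DEST/diag/rdbms/$DB_NAME/$INSTANCE_NAME/trace"
echo "$TRACE_PATH"    # → /u01/app/oracle/diag/rdbms/t1db/t1db/trace
```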
There are basically four main ways to find your trace file.
Option 0 - Just ask
Well I learned something new, so I wanted to update this article. There is a view you can directly query to see your current trace file name.
SELECT value
FROM v$diag_info
WHERE name = 'Default Trace File';
This will show the correct trace file name when you update your TRACEFILE_IDENTIFIER, so I would still recommend setting that as outlined in Option 1.
Option 1 - Manually set the trace file name
For this option, we set a session level parameter to drive the name of the trace file.
ALTER SESSION SET tracefile_identifier = 'SQLID_fua0hb5hfst77';
EXEC DBMS_SESSION.SET_SQL_TRACE(sql_trace => true);
Next run the SQL you want traced:

SELECT sum(t1.c), sum(t2.c)
FROM t1, t2
WHERE t1.a = t2.a
AND t1.d = 10;

Then disable the tracing:
EXEC DBMS_SESSION.SET_SQL_TRACE(sql_trace => false);
cd /u01/app/oracle/diag/rdbms/t1db/t1db/trace/
[oracle@srvr03 trace]$ ls -ltrh *fua0*
-rw-r----- 1 oracle oinstall 4.3K Feb  5 08:29 t1db_ora_4978_SQLID_fua0hb5hfst77.trc
-rw-r----- 1 oracle oinstall 2.0K Feb  5 08:29 t1db_ora_4978_SQLID_fua0hb5hfst77.trm
You can also enable additional tracing events, and check the maximum trace file size:

ALTER SESSION SET EVENTS '10053 trace name context forever, level 1'; -- CBO tracing
ALTER SESSION SET EVENTS '10046 trace name context forever, level 12'; -- Additional SQL trace information

SQL> show parameter max_dump_file_size;

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
max_dump_file_size                   string      unlimited
Option 2 - get your session diagnostic information
set linesize 100
column name format a20
column value format a70
select name, value from v$diag_info where name = 'Default Trace File';

NAME                 VALUE
-------------------- ----------------------------------------------------------------------
Default Trace File   /u01/app/oracle/diag/rdbms/t1db/t1db/trace/t1db_ora_31523.trc
Option 3 - Searching for the trace file
SQL> select userenv('SID') from dual;

USERENV('SID')
--------------
            19
find /u01/app/oracle/diag/rdbms/t1db/t1db/trace -name t1db_ora\*.trc -mtime 0 -exec grep -l "*** SESSION ID:(19." {} \;
/u01/app/oracle/diag/rdbms/t1db/t1db/trace/t1db_ora_26655.trc
/u01/app/oracle/diag/rdbms/t1db/t1db/trace/t1db_ora_4978_SQLID_fua0hb5hfst77_.trc
/u01/app/oracle/diag/rdbms/t1db/t1db/trace/t1db_ora_31523.trc
/u01/app/oracle/diag/rdbms/t1db/t1db/trace/t1db_ora_14396.trc
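A note on that grep pattern: the leading "*" makes it an unusual regular expression, so treating the marker as a fixed string with -F is safer. A self-contained illustration with two fake trace files (the file names and session IDs below are made up for the demo):

```shell
# Create two fake trace files carrying different session markers.
WORK_DIR=$(mktemp -d)
printf '*** SESSION ID:(19.48213) 2021-02-05\n' > "$WORK_DIR/t1db_ora_26655.trc"
printf '*** SESSION ID:(42.10007) 2021-02-05\n' > "$WORK_DIR/t1db_ora_31523.trc"
# -l lists matching files; -F matches the pattern literally, so the
# asterisks are not treated as regex operators.
grep -lF '*** SESSION ID:(19.' "$WORK_DIR"/*.trc
```

Only the file for session 19 is listed.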
Friday, September 4, 2020
Oracle 19c Database Upgrade - Performance testing options
You may be able to relate to this: upgrading Oracle databases can be a real drag.
Databases tend to be at the heart of critical business processes, and you may have spent a lot of time getting things into perfect working order. Upgrading your Oracle database from 11g to 19c might be very frightening due to the risk of things changing and losing all the work already done. This is normally where testing comes into play. What are your options for testing performance changes when moving from Oracle 11g to 19c?
There are basically three options:
- Oracle Real Application Testing or RAT
- Simulated application workload either manually or with tools like LoadRunner, NeoLoad, or a number of web specific tools
- Look at individual SQL either manually or through tools like SQL Tuning Sets in Oracle Tuning Pack
First let’s put a few rules in place when it comes to testing. For a test to be valid you need to use an environment that is as close as possible to the target production system you will be moving to. That means if you are moving to new servers or changing hardware / storage during the upgrade, make sure you're running tests on that new hardware or storage. Oracle 19c has many new features that take advantage of new hardware. Testing 19c on an 8-year-old piece of hardware that was running 11g is not the same as testing 19c on a brand-new server. As the optimizer in Oracle is very aware of the underlying infrastructure, it will not act the same, so you are not really testing the upgrade.
Let’s give a quick description of each of these options that might take some of the stress out of upgrading your Oracle database.
Oracle Real Application Testing or RAT
Oracle makes a very specific database tool for application workload testing: Real Application Testing, or RAT. Acronym aside, the tool is pretty interesting. RAT captures SQL statements from your production database; you then clone the database to another environment and re-run all the SQL. This is one of the most complete ways to test just the database component of your application. A couple of downsides to this tool: first, it only tests the database component. If you are just upgrading the database software, changing the database hardware, or only adjusting database settings, then this tool is great. If you are making application changes or updating the database schema at the same time, then this tool will not work, or at a minimum will not be able to test those changes. Also, RAT is an additional-cost license.
RAT will run through all the application SQL in order, but not necessarily in the same amount of time. For example, if you capture 8 hours of production workload, it will probably take longer than 8 hours to run the replay on a similarly sized test system, even if 19c is faster at individual SQL statements. Also, RAT cannot simulate everything, such as Data Guard and GoldenGate. These replication tools do cause some overhead, and RAT will not simulate that, though most of that overhead can be very small.
High-level steps for a RAT replay:
- Capture the workload on your production system, be sure to include AWR data
- Clone the production database to test area based on the start time of your capture
- Upgrade the cloned test system to 19c and apply any database configuration changes
- Transfer the capture files from production to test, and process the capture files
- Run the database replay on test
- Load your AWR data from production and run comparison reports between production and test
The great thing about RAT is you get a very detailed report of database effort and work. You can see exactly what the 19c and possibly new hardware did differently at the database level.
The downside to RAT is you cannot change the application schema other than adding indexes or updating statistics. Overall this is a great way to test your database upgrade, and you can roll back your test database and re-run the RAT testing multiple times. This allows for iterative changes to database configuration.
Simulating Application Workload
Another option is to do application testing. In this scenario you will have a full test system that includes database and application tiers and a full setup of application software. The basic idea is to set up a test environment that can be logged into and then create transactions or usage on the application.
There can be a lot of complexity in setting up another full environment with all the application components, and it can be difficult to replicate enough hardware to achieve production-level performance or load. A second challenge is the actual testing itself. It might be easy to have a user log in to the test environment and run a few application transactions. This can give you a feel for some individual performance, but it really can't compare to having many simultaneous users. A system with a single user accessing it will react very differently than a system with hundreds or even thousands of active users. In order to simulate those levels of workload you will need to use tools like LoadRunner or others that simulate end users running transactions through the application GUI or API. Creating tests that simulate many active users can also be extremely complex and still fall short of real workload patterns.
The high-level steps for application testing are very dependent on your testing process and environment. You may have a test system with a full technology stack and redundant application servers, load balancers, web servers, and databases. Or you may have a very simple setup just to perform basic smoke testing.
The plus side of this is that you can test both application changes and the database upgrade at the same time. You may, however, have a hard time during testing teasing out what is due to an application change versus a database upgrade change. That might extend your tuning effort, though it can decrease the amount of testing work.
Manually running SQL
A third option is to just manually run some SQL statements in your test database. This can be done by capturing the SQL from the production system using the SQL history in ASH or AWR, or with a SQL Tuning Set (STS) if you are licensed. SQL can also be captured using SQL or session tracing. These statements can then be re-run in a test environment to see how they perform after the database has been modified or upgraded.
As you might already be guessing, this is very complex. Finding the statements you want to re-run, possibly updating them, and then making sure the environment is set correctly for the SQL to work in the test environment can all be difficult. If you have no other avenue for testing, this can allow you to pick out specific long-running or high-impact SQL. This is really a very minimal test, but it may show how the software changes affected the SQL performance profile.
The high-level steps for SQL tuning vary a little based on if you are using a tool like SQL Tuning Sets or manually capturing statements. Either way it looks something like this:
- Identify and capture the specific SQL in production (be sure to capture explain plans and run time statistics)
- Have a test environment that is similar to production in data volume and configuration (or clone from production)
- Upgrade the test system to 19c
- Transfer the SQL statements to test (text files, STS, etc.)
- Run the statements in test and compare explain plans and run time statistics
This can be a little bit time consuming without a tool like STS that will do most of this for you with a few steps or even through tools like Oracle Enterprise Manager (OEM).
The plus side of running individual SQL statements is the ability to look at them in depth. You can get very detailed performance information by running traces or SQLT explain plans. The negative side is that you won't be able to get a multi-user view of the system.
Summary
All of this comes down to simulating the workload of the application in an upgraded database environment. The goal is to reduce the anxiety and risk of doing your upgrade. To be clear, none of this will fully eliminate that risk, though all of it will make you more prepared for the go-live date. No matter which path you choose, this will give you more hands-on experience with the 19c upgrade process and show how your application performs, differently or the same, on a 19c database.
Thursday, July 30, 2020
Keeping your Oracle Database Healthy
- Every patch or upgrade
- Proactively perform weekly or monthly
- Before and after a patch (compare)
- FAQ: Database Upgrade and Migration (Doc ID 1352987.1)
- Autonomous Health Framework (AHF) - Including TFA and ORAchk/EXAChk (Doc ID 2550798.1)
- Script to Collect DB Upgrade / Migrate Diagnostic Information (dbupgdiag.sql) (Doc ID 556610.1)
- hcheck.sql Script to Check for Known Problems in Oracle8i, Oracle9i, Oracle10g, Oracle 11g and Oracle 12c (Doc ID 136697.1)
- Assistant: Download Reference for Oracle Database/GI Update, Revision, PSU, SPU(CPU), Bundle Patches, Patchsets and Base Releases (DOC ID 2118136.2)
- Best Practices for running catalog, catproc and utlrpscript (Doc ID 863312.1)
- Debug and Validate Invalid Objects (Doc ID 300056.1)
- Where Can I Find the Parallel Version of Utlrp.sql? (Doc ID 230136.1)
- Master Note: Troubleshooting Oracle Data Dictionary (Doc ID 1506140.1)
- Overview of Refreshing Materialized Views (Doc ID 549874.1)
- How to Monitor the Progress of a Materialized View Refresh (MVIEW) (Doc ID 258021.1)
- Script to Check the Status or State of the JVM within the Database (Doc ID 456949.1)
Wednesday, June 17, 2020
Oracle ACFS / ASM Filter Driver and Oracle Enterprise Linux
Background References
- My Oracle Support (MOS) document “ACFS Support On OS Platforms (Certification Matrix). (Doc ID 1369107.1)”
- "Oracle Linux and Unbreakable Enterprise Kernel (UEK) Releases"By Simon Coter, SENIOR MANAGER, ORACLE LINUX AND VIRTUALIZATION PRODUCT MANAGEMENT https://blogs.oracle.com/scoter/oracle-linux-and-unbreakable-enterprise-kernel-uek-releases
Linux Kernel Version
Support Dates       | OEL Version
Jun 2007 - Jun 2017 | OEL5
Feb 2011 - Mar 2021 | OEL6
Jul 2014 - Jul 2024 | OEL7
Jul 2019 - Jul 2029 | OEL8

UEK YUM Channel   | UEKR1  | UEKR2     | UEKR3     | UEKR4     | UEKR5     | UEKR6
Major UEK Version | 2.6.32 | 2.6.39    | 3.8.13    | 4.1.12    | 4.14.35   | 5.4.17
Released          |        | Mar, 2012 | Oct, 2013 | Jan, 2016 | Jun, 2018 | Mar, 2020
ACFS / AFD Linux Kernel Compatibility
DB Version | Supported OEL | UEKR1 (2.6.32) | UEKR2 (2.6.39) | UEKR3 (3.8.13) | UEKR4 (4.1.12) | UEKR5 (4.14.35) | UEKR6 (5.4.17)
11.2.0.3   | OEL5 - OEL6   | 2.6.32-100     | 2.6.39-100     |                |                 |                 |
11.2.0.4   | OEL5 - OEL6   | 2.6.32-100     | 2.6.39-100     | 3.8.13-118     | 4.1.12-112.16.4 |                 |
12.1.0.1   | OEL5 - OEL6   | 2.6.32-100     | 2.6.39-100     |                |                 |                 |
12.1.0.2   | OEL6 - OEL7   |                | 2.6.39-100     | 3.8.13-118     |                 |                 |
12.2.0.1   | OEL6 - OEL7   |                | 2.6.39-100     | 3.8.13-118     | 4.1.12-112.16.4 | 4.14.35-1902    |
18c        | OEL6 - OEL7   |                | 2.6.39-100     | 3.8.13-118     | 4.1.12-112.16.4 | 4.14.35-1902    | TBD
19c        | OEL7          |                |                |                | 4.1.12-112.16.4 | 4.14.35-1902    | TBD
$ acfsdriverstate supported -v
ACFS-9459: ADVM/ACFS is not supported on this OS version: '4.14.35-1902.10.8.el7uek.x86_64'
ACFS-9201: Not Supported
ACFS-9553: Operating System: linux
ACFS-9554: Machine Architecture: x86_64-linux-thread-multi
ACFS-9555: Operating system name and information: Linux srvr03 4.14.35-1902.10.8.el7uek.x86_64 #2 SMP Thu Feb 6 11:02:28 PST 2020 x86_64 x86_64 x86_64 GNU/Linux
ACFS-9556: Release package: oraclelinux-release-7.7-1.0.5.el7.x86_64
ACFS-9557: Version: ADVM/ACFS is not supported on 4.14.35-1902.10.8.el7uek.x86_64
ACFS-9558: Variable _ORA_USM_NOT_SUPPORTED is defined: no
ASM Required Patches
DB Version | OS Version | UEK Channel | UEK Minimum | Base bug or Base version |
11.2.0.3.6 | OEL5 - OEL6 | UEKR1 | 2.6.32-100 | 15986571 |
11.2.0.3.7 | OEL5 - OEL6 | UEKR1 | 2.6.32-100 | 12983005 |
11.2.0.4 | OEL5 | UEKR1 | 2.6.32-100 | Base |
11.2.0.4 | OEL5 - OEL6 | UEKR2 | 2.6.39-100 | Base |
11.2.0.4.4 | OEL6 | UEKR3 | 3.8.13-118 | 16318126 |
11.2.0.4.6 | OEL7 | UEKR3 | 3.8.13-35 | 18321597 |
11.2.0.4.180717 | OEL6 - OEL7 | UEKR4 | 4.1.12-112.16.4 | 22810422, 28171094, 27463879 |
12.1.0.1 | OEL5 - OEL6 | UEKR1 | 2.6.32-100 | Base |
12.1.0.1 | OEL5 - OEL6 | UEKR2 | 2.6.39-100 | Base |
12.1.0.2.170718 | OEL5 - OEL6 | UEKR2 | 2.6.39-100 | Base |
12.1.0.2.170718 | OEL6 | UEKR3 | 3.8.13-13 | Base |
12.1.0.2.170718 | OEL7 | UEKR3 | 3.8.13-35 | 18321597 |
12.1.0.2.181016 | OEL6 - OEL7 | UEKR4 | 4.1.12-112.16.4 | 22810422, 27942938, 28171094, 27942938 |
12.1.0.2.190716 | OEL7 | UEKR5 | 4.14.35-1902 | 27494830 |
12.2.0.1 | OEL6 | UEKR2 | 2.6.39-100 | Base |
12.2.0.1 | OEL6 | UEKR3 | 3.8.13-13 | Base |
12.2.0.1.180717 | OEL6 | UEKR4 | 4.1.12-112.16.4 | 27463879, 28171094 |
12.2.0.1 | OEL7 | UEKR3 | 3.8.13-35 | Base |
12.2.0.1.180717 | OEL7 | UEKR4 | 4.1.12-112.16.4 | 27463879, 28171094, 27463879 |
12.2.0.1.181016 | OEL7 | UEKR5 | 4.14.35-1902 | 28069955
18.3.0.0 | OEL6 | UEKR2 | 2.6.39-100 | Base |
18.3.0.0 | OEL6 | UEKR3 | 3.8.13-13 | Base |
18.4.0.0.181016 | OEL6 | UEKR4 | 4.1.12-112.16.4 | 28069955 |
18.3.0.0 | OEL7 | UEKR3 | 3.8.13-35 | Base |
18.4.0.0.181016 | OEL7 | UEKR4 | 4.1.12-112.16.4 | 27463879, 28171094, 27463879 |
18.4.0.0.181016 | OEL7 | UEKR5 | 4.14.35-1902 | 27494830 |
19.3.0.0 | OEL7 | UEKR4 | 4.1.12-112.16.4 | Base |
19.4.190716 | OEL7 | UEKR5 | 4.14.35-1902 | 27494830 |
Patch 31055785 : applied on Fri May 22 14:15:34 CDT 2020
Unique Patch ID: 23448796
Patch description: "ACFS Interim patch for 31055785"
Created on 20 Mar 2020, 03:53:18 hrs PST8PDT
Bugs fixed:
28531803, 30685278, 27494830, 27917085, 28064731, 28293236, 28321248
28375150, 28553487, 28611527, 28687713, 28701011, 28740425, 28818513
28825772, 28844788, 28855761, 28860451, 28900212, 28951588, 28960047
28995524, 29001307, 29030106, 29031452, 29039918, 29115917, 29116692
29127489, 29127491, 29167352, 29173957, 29198743, 29229120, 29234059
29250565, 29264772, 29302070, 29313039, 29318169, 29338628, 29339084
29350729, 29363565, 29379932, 29391373, 29411007, 29417321, 29428155
29437701, 29482354, 29484738, 29520544, 29524859, 29527221, 29551699
29560075, 29586338, 29590813, 29604082, 29643818, 29704429, 29705711
29721120, 29760083, 29779338, 29791186, 29848987, 29851205, 29862693
29872187, 29893148, 29929003, 29929061, 29937236, 29941227, 29963428
30003321, 30032562, 30046061, 30051637, 30057972, 30076951, 30093449
30140896, 30179531, 30239575, 30251503, 30264950, 30269395, 30275174
30324590, 30363621, 30655657

As you can see, this includes the required patch 27494830 (first row, 3rd column). Depending on your version of Oracle and quarterly bundle patch, there are many patch options available. The quarterly bundle patches for GI generally contain an updated ACFS patch as well.
Details of the ASM Kernel Modules
$ ls $GRID_HOME/usm/install/Oracle/EL7UEK/x86_64
4.1.12/  4.1.12-112.16.4/  4.14.35-1902/
The command "asmcmd afd_configure -e" is used to install the AFD kernel modules. The AFD modules use the Kernel Application Binary Interface (kABI) and are installed under the /lib/modules location on your Linux system. As they are considered weak modules, they are placed in one location and then symbolically linked into each kernel-specific directory when the kernel is updated.
[root@srvr03 sbin]# ls -l /lib/modules/4.14.35-1902*/{extra,weak-updates}/oracle
/lib/modules/4.14.35-1902.0.9.el7uek.x86_64/extra/oracle:
total 9824
-rw-r--r-- 1 root root 10055682 Jun 5 13:16 oracleafd.ko

/lib/modules/4.14.35-1902.10.8.el7uek.x86_64/weak-updates/oracle:
total 0
lrwxrwxrwx 1 root root 69 Jun 5 13:17 oracleafd.ko -> /lib/modules/4.14.35-1902.0.9.el7uek.x86_64/extra/oracle/oracleafd.ko

/lib/modules/4.14.35-1902.11.3.1.el7uek.x86_64/weak-updates/oracle:
total 0
lrwxrwxrwx 1 root root 69 Jun 5 13:44 oracleafd.ko -> /lib/modules/4.14.35-1902.0.9.el7uek.x86_64/extra/oracle/oracleafd.ko
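The weak-module linkage can be illustrated with plain filesystem operations. A sketch under a temporary directory standing in for /lib/modules (kernel version strings shortened; nothing here touches the real module tree):

```shell
# One real oracleafd.ko lives under the build kernel's extra/ directory...
ROOT=$(mktemp -d)                  # stand-in for /lib/modules
mkdir -p "$ROOT/4.14.35-1902.0.9/extra/oracle" \
         "$ROOT/4.14.35-1902.10.8/weak-updates/oracle"
touch "$ROOT/4.14.35-1902.0.9/extra/oracle/oracleafd.ko"
# ...and a newer kernel's weak-updates/ directory just symlinks back to it.
ln -s "$ROOT/4.14.35-1902.0.9/extra/oracle/oracleafd.ko" \
      "$ROOT/4.14.35-1902.10.8/weak-updates/oracle/oracleafd.ko"
readlink "$ROOT/4.14.35-1902.10.8/weak-updates/oracle/oracleafd.ko"
```

The readlink output shows the single real module that every kernel-specific directory resolves to.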
[root@srvr03 init.d]# chkconfig --list afd

Note: This output shows SysV services only and does not include native
      systemd services. SysV configuration data might be overridden by native
      systemd configuration. If you want to list systemd services use
      'systemctl list-unit-files'. To see services enabled on particular
      target use 'systemctl list-dependencies [target]'.

afd             0:off   1:off   2:off   3:on    4:off   5:on    6:off
Here you can see that the AFD module will attempt to be loaded at run levels 3 and 5. You can also check what version of the AFD module you have loaded with the following Linux command:
[oracle@srvr03 ~]$ modinfo oracleafd
filename:       /lib/modules/4.14.35-1902.11.3.1.el7uek.x86_64/weak-updates/oracle/oracleafd.ko
license:        Oracle Corporation
description:    ASM Filter Driver
author:         Oracle Corporation
srcversion:     533BB7E5866E52F63B9ACCB
depends:
retpoline:      Y
name:           oracleafd
vermagic:       4.14.35-1902.0.9.el7uek.x86_64 SMP mod_unload modversions
signat:         PKCS#7
signer:
sig_key:
sig_hashalgo:   md4
parm:           oracleafd_use_logical_block_size:Non-zero value implies using logical sectorsize (int)
parm:           oracleafd_use_maxio_softlimits:Non-zero value implies using conservative maximum block io limits (int)
Use the lsmod command to see if the module is loaded. If the command returns no lines, it is not loaded.
[oracle@srvr03 ~]$ lsmod | grep oracleafd
oracleafd             229376  0
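If you want to script that check, say in a health-check job, the lsmod output parses easily. A sketch against a sample output line so it runs anywhere; on a real host you would swap in $(lsmod):

```shell
# Sample lsmod output line; on a real host use: LSMOD_OUT=$(lsmod)
LSMOD_OUT='oracleafd             229376  0'
# Module names appear at the start of the line, so anchor the match.
if printf '%s\n' "$LSMOD_OUT" | grep -q '^oracleafd'; then
  echo "oracleafd loaded"
else
  echo "oracleafd not loaded"
fi
```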
Conclusion
Ok, so hopefully this has given you a clear picture of what you need to plan for when using ASM Filter Driver (AFD) or ACFS on your OEL system with UEK. Once you have a base version of the kernel and GI you should have few issues unless you jump UEK channels or do a GI major version change. As many people are eyeing 19c upgrades, hopefully the above information helps prepare your Linux OS for the change. For myself, what started as a somewhat simple post about kernel drivers turned into a pretty long delve into the workings of the ASM Filter Driver. Please leave your comments and questions.