Friday, May 22, 2015

Oracle Unified Directory 11gR2 PS3 Is Out

Oracle Unified Directory (OUD) 11g Release 2 Patch Set 3 (11.1.2.3) is now available for download through Oracle's cloud software delivery service (eDelivery) or Oracle Technology Network (OTN).

I'm really excited to share the new features introduced with this patch set.  Before getting to those features, though, it is important to note that since Oracle acquired Sun Microsystems back in 2010, the product management and engineering teams have quietly continued working on many strategic long-term investments in directory services.  Much of what has been revealed thus far has focused on OUD's strategic role in the Middleware portion of the Oracle stack.   This includes both ensuring OUD is pre-qualified to work with Oracle's software portfolio as well as certifying those products to work with OUD.   Examples include:
  • Native and pre-qualified support by all of Oracle’s Identity Governance, Access Management,  Mobile Security product suites, Fusion Applications and other products within the Oracle Directory Services Plus suite
  • Native support of Enterprise User Security for centralizing Oracle database authentication and authorization
  • LDAP virtualization to backend data sources such as AD and LDAPv3 and attribute transformations for centralizing access to multiple data sources
  • Native real time bi-directional replication with ODSEE 11g
  • Synchronization between OUD and other data sources such as AD, LDAPv3 and RDBMS
  • Support for Oracle's Execution Context ID tagging for end-to-end transaction auditing across Oracle products
  • Comprehensive monitoring through Oracle Enterprise Manager’s monitoring system
    • Monitor for availability and performance
    • Collect monitoring metrics for capacity planning and comprehensive view of usage
    • Alert on incidents and metric thresholds
    • Correlate events
    • Run pre-defined commands remotely to stop, start or restart services
    • Rollup metrics into abstracted levels like data centers
    • Use corrective actions to streamline incident management
    • Provide Service Level Agreement (SLA) reporting against Service Level Objectives
Of course OUD has continued to be infused with technical improvements as well.   Some examples include:
  • Radically simplified replication configuration compared to all previous generations of Sun's directory services
  • Dramatically improved replication performance and scalability
  • Extremely simple elastic expansion and contraction of an OUD replication topology for any architecture, whether in the enterprise or in the cloud
  • Improved overall performance and scalability
  • Easy to use service configuration through the Oracle Directory Services Manager
  • Greatly simplified tuning through dstune
  • Attribute encryption
  • Entry compaction
  • New plugin API for writing your own custom plugins
But all of that improvement is not why you're reading this post.  No, it's about the improvements introduced by OUD 11g Release 2 Patch Set 3 (11.1.2.3).  As I said before, this patch set, like previous releases, reveals just some of the ongoing investments that Oracle continues to make in its directory services portfolio.  With that, here are some of the new features introduced with this patch set:

Enhanced Security
  • Attribute masking in audit log
  • Password expiration virtual attribute
  • Password policies with ability to select 3 out of 4 character sets
  • Added certificate management commands, support for HSM integration
  • Enhancement to Linux crypt algorithm
Simplified Deployments
  • Bi-directional replication with Sun DSEE 6.3
  • Key OUD metrics added to ODSM, similar to the ODSCC console
  • Non-intrusive and password filter methods of password synchronization with Active Directory
  • Out of the box optimization with auto-adaptive JVM tuning
New Virtualization Use Cases
  • Join configuration added to ODSM
  • RDBMS workflow element added to command line
  • Plug-in for storing data source password updates
  • Hide entry by filter workflow element
  • GetRidOfDuplicate filter workflow element
  • MemberOf virtual attribute
  • New previous-last-login-time attribute
Scalability and Performance
  • Support for very large static groups up to millions of members
  • Reduce memory footprint with selective attribute caching and attribute tokenization
Download OUD 11g R2 PS3 today from eDelivery or OTN and try it out for yourself.  The documentation set for OUD 11g R2 PS3 is available here.  The full updated identity management documentation set is available here.

Enjoy!

Brad

Tuesday, January 27, 2015

PAMLDAP: Provisioning UNIX Accounts In AD

Leveraging Active Directory (AD) user and group data for UNIX authentication and authorization is common these days through frameworks such as PAM LDAP and SSSD.

A related question customers frequently ask is how to provision and update the UNIX (posixAccount schema) attributes, such as uid, uidNumber, and gidNumber, of AD users and groups using existing AD tools.

Microsoft offers management of the required UNIX attributes through the "Network Information Service" component of the Identity Management for Unix Role Service.

This blog post walks you through how to enable UNIX user and group provisioning in Microsoft Windows Server 2008 R2.

1. The first step is to add the Role Service.
     a. Click Start --> Administrative Tools --> Server Manager
     b. Right click on Server Manager --> Roles --> Active Directory Domain Services and select Add Role Services

     c. Select "Server for Network Information Services"

     d. Complete the wizard workflow, including restarting the server

2. Index the UNIX attributes to ensure optimal AD performance when using UNIX attributes.
     a. Register the Schema Management Snap-In by clicking Start --> Type regsvr32 schmmgmt --> Press Enter --> Click OK


     b. Add AD Schema snap-in by clicking Start --> Type mmc /a --> Press Enter


     c. Click File --> Click Add/Remove snap-in...


     d. Click Active Directory Schema --> Click Add --> Click OK


     e. Click on Attributes --> Scroll down to uid --> Right click on uid --> Click Properties --> Check the "Index this attribute" checkbox --> Click OK


     f. Repeat the previous step for uidNumber and gidNumber
     g. Close and save changes to Console1

3. Provision UNIX users and groups or update existing ones (a quick way to verify the results is sketched after these steps)
     UNIX Groups:
     a. Click Start --> Type User --> Click Active Directory Users and Computers
     b. Select an existing group or add a new one
     c. Click on UNIX Attributes tab

     d. Select the NIS Domain ("example" in my case)
     e. If appropriate, update Group ID (GID)
     f. Click OK
     g. Repeat a-f for all UNIX groups

     UNIX Users:
     a. Click Start --> Type User --> Click Active Directory Users and Computers
     b. Select an existing user or add a new one
     c. Click on UNIX Attributes tab

     d. Select the NIS Domain ("example" in my case)
     e. If appropriate, update the User ID (UID), Login Shell, etc.
     f. Select Primary group
     g. Click OK
     h. Repeat a-g for all UNIX users
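
Once the users and groups are provisioned, a quick way to verify the results is to query AD for the UNIX attributes from one of your UNIX hosts.  Below is a minimal sketch using the OpenLDAP client tools; the host name, bind account, base DN, and sAMAccountName are placeholders for your environment:

     ldapsearch -x -H ldap://ad.example.com \
       -D "binduser@example.com" -W \
       -b "dc=example,dc=com" \
       "(sAMAccountName=jdoe)" \
       uid uidNumber gidNumber loginShell unixHomeDirectory

If the attributes come back populated, your PAM LDAP or SSSD clients should be able to resolve the account.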

That's it.  Enjoy!

Brad

Monday, April 7, 2014

Adding Thunderbolt Bridged Network to VMware Fusion 6

A Thunderbolt bridged network is a very fast, high-volume network fabric.  I like to use it for syncing data between two Mac computers.  During high-volume syncs, I typically get between 300-500 MB/s.

Another favorite thing to do is enable multiple virtual machines (VMs) to communicate over the Thunderbolt network fabric.  However, this isn't yet supported out of the box with VMware Fusion.  Fortunately, that never stopped a persistent geek from finding a way to make it work.

Thanks to MaZePallas for their solution.  I borrowed from their work to come up with the following solution for my environment and needs.


1. Create a private bridged network using static IP addresses between the two Macs over the Thunderbolt cable.  In my case, the network is 192.168.3.0/24, where one host was on 192.168.3.2 and the other was on 192.168.3.3.
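
If you prefer Terminal over System Preferences for this step, something like the following should work.  This is a minimal sketch, assuming the macOS network service is named "Thunderbolt Bridge" (verify with the first command); since there is no real router on this point-to-point link, the peer's address is passed as a placeholder because networksetup expects a router argument:

networksetup -listallnetworkservices
sudo networksetup -setmanual "Thunderbolt Bridge" 192.168.3.2 255.255.255.0 192.168.3.3

Use 192.168.3.3 (and 192.168.3.2 as the placeholder router) on the other Mac.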

2. Next, use MaZePallas's recommendation to add a VMware virtual network (vmnet2) to the VMware configuration.  I used the following sequence of commands for this purpose:

sudo /bin/bash
# cd /Applications/VMware\ Fusion.app/Contents/Library
# ./vmnet-cfgcli vnetcfgadd VNET_2_DHCP no
# ./vmnet-cfgcli vnetcfgadd VNET_2_HOSTONLY_SUBNET 192.168.3.0
# ./vmnet-cfgcli vnetcfgadd VNET_2_HOSTONLY_NETMASK 255.255.255.0
# ./vmnet-cfgcli vnetcfgadd VNET_2_VIRTUAL_ADAPTER yes
# ./vmnet-cli --configure
# ./vmnet-cli --stop
# ./vmnet-cli --start
3. Manually add vmnet2 to your vmx configuration file:
vi .vmwarevm/.vmx:
...
ethernet1.present = "TRUE"
ethernet1.connectionType = "custom"
ethernet1.virtualDev = "e1000"
ethernet1.wakeOnPcktRcv = "FALSE"
ethernet1.addressType = "generated"
ethernet1.vnet = "vmnet2"
ethernet1.addressType = "static"
ethernet1.linkStatePropagation.enable = "FALSE"
ethernet1.wakeOnPcktRcv = "FALSE"

4. Apply the following patch to VMware's services.sh script:
--- /Applications/VMware Fusion.app/Contents/Library/services.sh_
+++ /Applications/VMware Fusion.app/Contents/Library/services.sh
@@ -661,6 +661,10 @@
    if retString=`"$LIBDIR/vmnet-cli" --start`; then
       echo "Started network services"
+      ifconfig vmnet2 down
+      ifconfig vmnet2 inet delete
+      ifconfig bridge0 addm vmnet2
+      ifconfig vmnet2 up
    else
       logger -s -t "VMware Fusion 1398658" \
          "Error: Unable to start the network services. Error: $retString [$?]"
@@ -682,6 +686,9 @@

--stop)
    "$LIBDIR/vmware-usbarbitrator" --kill || true
+   ifconfig vmnet2 down
+   ifconfig bridge0 deletem vmnet2
+   ifconfig vmnet2 up
    "$LIBDIR/vmnet-cli" --stop

    vmware_stop_pidfile /var/run/vmnet-bridge.pid

5. Shut down and restart the Mac, because VMware doesn't seem to fully recognize the network otherwise.
6. Add a second network adapter to each of the Virtual Machines (VMs) that uses the vmnet2 custom and private network.

7. Once the VM virtual hardware has been updated, configure the IP address of the new adapter within the guest operating system.  In my case, I reserved 192.168.3.5-20 for the VMs.  Note that if you copied a VM from one host to the other, change the MAC address(es) of the VM on the destination host before starting it up.  Otherwise, you end up with network collisions from duplicate MAC addresses.
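
For reference, one way to configure the second adapter inside a RHEL/OEL-style Linux guest is sketched below; the device name (eth1) and the address are assumptions, so adjust them for your VM:

# cat /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
BOOTPROTO=static
IPADDR=192.168.3.5
NETMASK=255.255.255.0
ONBOOT=yes
# ifup eth1

Other distributions (or setting a static IP through the guest's network manager GUI) work just as well.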

Hope that helps!

Brad

Friday, January 31, 2014

Mapping Ports To Processes...

One forensic task that I do so infrequently that I have to look it up each time is determining which process is listening on a port.  Therefore, I am finally capturing my favorite methods in this blog post.

Linux Options

For Linux, the following netstat command is the most succinct method:

# netstat -tulpn
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address               Foreign Address             State       PID/Program name   
tcp        0      0 0.0.0.0:4000                0.0.0.0:*                   LISTEN      12668/thnuclnt      

Alternatively, you can use fuser to map a port to a process id:

# fuser 4000/tcp
4000/tcp:            12668

To learn more about the process id (pid), look in the proc table with:
# ls -al /proc/12668/exe
lrwxrwxrwx 1 oracle oracle 0 Jan 30 18:47 /proc/12668/exe -> /usr/lib/vmware/bin/appLoader
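
On newer Linux distributions that ship iproute2 instead of net-tools, ss reports the same information; run it as root so the owning process is shown for every socket:

# ss -tulpn

To narrow it to a single port (4000 here, matching the example above):

# ss -ltnp 'sport = :4000'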

Solaris Options

For Solaris 10 and beyond, you can use pfiles:

# pfiles `ptree | awk '{print $1}'` | egrep '^[0-9]|port:'

To list all port to pid mappings on a system:

for p in `ls /proc`;do a='';P=$(pfiles $p|grep 'port: '|sed -e "s/^.*port: //g");for pt in $P;do if [ "$pt" -gt 0 ];then a=$(pargs $p|grep -v argv);echo "Port $pt --> Pid $a";fi;done;done

To find all processes associated with a specific port:

# port=22;for p in `ls /proc`;do a='';P=$(pfiles $p|grep 'port: '|sed -e "s/^.*port: //g");for pt in $P;do if [ "$pt" -eq $port ];then a=$(pargs $p|grep -v argv);echo "Port $pt --> Pid $a";fi;done;done
Port 22 --> Pid 2049:   /usr/lib/ssh/sshd
Port 22 --> Pid 2129:   /usr/lib/ssh/sshd
Port 22 --> Pid 217:    /usr/local/sbin/sshd -f /usr/local/etc/sshd_config -R
Port 22 --> Pid 223:    /usr/local/sbin/sshd -f /usr/local/etc/sshd_config -R
Port 22 --> Pid 2698:   /usr/local/sbin/sshd -f /usr/local/etc/sshd_config
Port 22 --> Pid 2698:   /usr/local/sbin/sshd -f /usr/local/etc/sshd_config
Port 22 --> Pid 419:    /usr/lib/ssh/sshd
Port 22 --> Pid 461:    /usr/lib/ssh/sshd
Port 22 --> Pid 462:    /usr/lib/ssh/sshd
pfiles: cannot examine 9499: no such process

If available you can use lsof.  For example:

# lsof -i TCP:portnumber

or

# lsof -i:portnumber

Great References:
* http://www.cyberciti.biz/faq/what-process-has-open-linux-port/
* http://stackoverflow.com/questions/91169/what-process-is-listening-on-a-certain-port-on-solaris

Friday, July 20, 2012

Web, REST, SOAP, LDAP, oh my!

This week, I was in Santa Clara for a preview of Oracle's recently announced 11g R2 version of the Oracle Identity and Access Management platform.  I was very impressed by the innovation that Oracle invested into this release.

Oracle's access management layered on top of Oracle's directory services is a powerful combination that enables high performance single sign-on authentication and authorization for mobile applications (e.g. iOS and Android apps), web services, applications and even desktop applications.

Customers concerned with protecting their digital assets such as identity data, intellectual property, and core data will be very interested in this new version.  For example, one of the most recent emerging threats to companies is the BYOD revolution (or epidemic, depending on your point of view).  With the 11g R2 release, they will now, for the first time, have a comprehensive access management solution for protecting these assets regardless of the end-point device.

For example, with 11g R2, an employee can securely log in via single sign-on (SSO) from his iPhone, iPad or Android device to the company's various web sites and apps (e.g. CRM, phone book, expense reporting, ...) and flip between them without having to log in to each one individually.  But then, imagine that just a few minutes later, the same iPhone attempts to access one of these apps from an entirely different location because the iPhone was stolen.  Adaptive Access detects the contextual change through its context-based risk-scoring analysis and issues a challenge question before permitting the end user to use the app.  If the thief cannot correctly answer the security question(s), then access to all of the corporate apps and web services could be suspended from that device.  That is powerful!

This example can be extended further by looking at it from the perspective of someone with nefarious intent attempting to log in to one of the company's web services using valid (but stolen) privileged credentials via Web, REST, SOAP or other web-service-oriented protocols.  Contextual elements such as location, browser type and version, time of day, network address and many others would be used by Adaptive Access to determine whether this really is who the user says they are.  If any of these contextual elements are outside the norm for the user, then the risk-scoring engine would challenge the user to answer security question(s) or perhaps just block access altogether.

As the mobile market momentum continues to build, I expect that interaction with identity data through ever-expanding protocols such as Web, REST, SOAP, and LDAP is going to grow exponentially over time.  This implies that your access management and identity infrastructure will need to scale to meet the challenge, and to do so as securely as possible.  Oracle's 11g R2 access and identity management not only lets you leverage identity data through these and other emerging protocols, it lets you do so very securely.

Lastly, I have only mentioned a few of the features in 11g R2.  There are many other great things, like unified coarse- and fine-grained policy management for all of your web service, app and desktop interactions.  Read the announcement and then reach out to your local Oracle sales representative to learn more.

Brad
p.s. Disclaimer: I am an Oracle employee but one that is pumped about this new opportunity to help customers grow their business in a secure and scalable manner.

Monday, January 23, 2012

Installing VM Tools in OEL

I recently set up an Oracle Enterprise Linux 5 virtual machine in VMware.  Unfortunately, the VMware Tools wouldn't install cleanly.  While researching this issue, I found that some people encountered it with both OEL5 and OEL6.  However, I haven't tested it with OEL6 yet.

Below is what I had to do to resolve the issue.

1. Install the latest update of Oracle Enterprise Linux 5.
2. Download and enable Oracle's public yum service.
      wget -qO - http://public-yum.oracle.com/public-yum-el5.repo|sed \
       -e "s/enabled=0/enabled=1/g" > /etc/yum.repos.d/public-yum-el5.repo

3. Install the requisite packages to compile the VMware tools
      yum install -y kernel-uek-headers-`uname -r` gcc kernel-uek-devel
4. Extract the VMware Tools
      tar --gunzip -xf /media/VMware\ Tools/VMwareTools-*.tar.gz
5. Attempt to compile the VMware tools
      cd vmware-tools-distrib/bin
      ./vmware-install.pl
           Agree to all of the default settings except the display settings.  
           For that, I selected the number that corresponds to 1024x768.  
           This was just my preference.

If you happen to get an error similar to "No module ehci-hcd found for kernel", then you will need to append to the $content variable so that the hcd modules are built: edit bin/vmware-config-tools.pl and add the --builtin line shown in the following excerpt.
    foreach my $key (@gRamdiskModules) {
      if ($style eq 'redhat') {
        $content .= " --with=" . get_module_name($key) . "  ";
        $content .= " --builtin=ehci-hcd --builtin=ohci-hcd \
           --builtin=uhci-hcd ";
      } else {
        $content .=  get_module_name($key) . ' ';
      }
    }

Once that fix is applied, re-run ./vmware-install.pl and then restart the VM to enable copy/paste.
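
As a quick sanity check after the reboot, you can confirm that the VMware kernel modules and guest daemons actually came up.  These commands are generic, since the exact module and daemon names vary by Tools version:

      lsmod | egrep -i 'vmci|vmxnet|vmhgfs'
      ps -ef | grep -i [v]mware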

Saturday, January 7, 2012

Family & Friends Backup Plan

Hello,

This is a reminder to all of my friends and family to stop what you are doing and back up your computer NOW!  Seriously!  Go! NOW!!!!

O.K., I'm done with that soap box.

I have done several full Microsoft Windows PC/laptop recoveries over the past few weeks.  So far I have been able to back up everyone's data, reinstall Windows, scan and smash the viruses/malware, and safely restore the data to the repaired system.  However, there was one close call where I wasn't sure recovery would be possible because of a failing disk drive.  Fortunately it worked out for the data that mattered most... family pictures, iTunes data, and misc docs.  In each of these cases, NONE had a current full backup of their data.

Nearly all of these recoveries were necessitated by one wrong click on an infected e-mail, text message, or browser link that unleashed some terrible virus or malware.  Normally at this point I would gently bash Microsoft Windows for the ease with which it gets infected with all sorts of malware, but I will not digress this time.  Instead I will return to the topic of this post... backups.

If you don't have a current backup of all your important data or don't know how to back up your data, this blog is for you.  I am going to share with you a simple 10-step program to back up your data and ensure that it stays backed up.

Step 1. Determine the sum of all data from all computers that need to be backed up.  Let's say you have a MacBook Pro with 200GB of data and a Windows desktop PC with 400GB of data.  The total data for these two computers is 600GB.  The following steps can help you determine how much storage a given computer is using.

  • For a Microsoft Windows computer, right click on the Start button, click on Explorer, then right click on each hard drive (C:, E:, ...) and click on Properties.  This should show the size of the disk drive and how much is used by data.
  • For a Mac, click on Finder, then right-click (or Ctrl-click) each disk drive starting with Macintosh HD and select Get Info.  Sum the capacities of all the disk drives to back up (or use the Terminal one-liner below).
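
If you (or your resident computer geek) are comfortable with Terminal on the Mac, the same numbers are one command away; add up the "Used" column for the drives you plan to back up:

  df -h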

Step 2.  Buy an external hard disk drive or storage array that is large enough to hold two or three times the capacity determined from Step 1 above.  There are several 1TB, 2TB, and even 3TB disk drives available for under $200.  I usually get the best deals on storage either through NewEgg.com or some really good deal at Fry's.  The advantage of NewEgg.com is that they usually offer a really good price plus free shipping and no tax.
Step 3. Attach the storage to a desktop computer that you can leave on all the time for network backups.  If you don't have a computer for this purpose, go buy an inexpensive desktop from Dell, Best Buy, Fry's, ... etc.  You should be able to find a sufficient desktop system for under $500.  The primary purpose of this system is to provide a safe destination for your computer backups.
Step 4. Download and Install the appropriate version of CrashPlan from CrashPlan.com for your desktop computer that has the storage attached to it.  CrashPlan is FREE when you are backing up to your own local storage or local computers.
Step 5. Sign up for a CrashPlan account making note of the e-mail address and password used for the CrashPlan account.
Step 6. Select what to backup with the following steps

  1. Click on "Backup" from the left hand menu
  2. Click on "Change..." under Files to select what is to be backed up.
  3. By default, CrashPlan selects the home directory of the user installing CrashPlan.  If there are other users on that computer, you will want to check their home directories as well.  Be sure to browse around and select all drives that may contain important data.  When in doubt, back it all up.
  4. Click on Save to save your backup selections.

Step 7. Make sure that the attached storage is formatted and rename the drive to "CPBackups".
Step 8. Setup CrashPlan to use the attached storage with the following steps.

  1. Start the CrashPlan app
  2. Click on Destinations
  3. Click on Folders
  4. Click on "Select..." 
  5. Select the "CPBackups" drive
  6. Click on "Start Backup" 

Step 9. Now it's time to install CrashPlan on the rest of the computers and configure them to back up their data over the network to the desktop computer with the external storage attached.  Do the following steps on each computer.

  1. Download CrashPlan, install it, and login with your credentials from Step 5.
  2. Click on Destinations
  3. Click on Computers
  4. Select the desktop computer that is running CrashPlan
  5. Click on "Start Backup"

Step 10. Periodically check the health of your desktop computer to ensure that the external storage has not started giving any errors.

Lastly, consider switching to an Apple computer the next time you are ready to make a computer purchasing decision.  I don't want to belabor this point, but about 6 months after you've made the switch you will wonder why you hadn't done it much sooner.  I'm not saying that Apple computers aren't susceptible to viruses or vulnerabilities.  Their track record, though, has been 10,000 times better than Microsoft Windows.  Of all the computers that I have recovered from malware infections over the past 10 years, NONE of them have been Apple computers.

Blessings to you and yours!

Brad