Technical: Disk – IO – Performance – Disk Benchmark – Tools – DiskMark

Introduction

I have a couple of internal disks and network SAN storage and wanted to know how they compare.

So I looked on the Net for free disk I/O benchmark and profiling tools.

Tools

• Iometer
• Atto
• CrystalDiskMark

I tried a couple of the tools, but the one I settled on is DiskMark from NetworkDLS.

Install the tool

Usage

Once the tool is installed, launch it.

There are four major areas you want to touch on.

Disk Drive

All the disks on the system are available in the “Disk Drive” drop-down box.

Set Size

This is the payload unit size.  It is represented in bytes.

You are able to use simple math when entering your value.

Size                     Value
1 KB                     1024
64 KB                    64 * 1024
1 MB (1000 KB)           1000 * 1024
1 GB (1000 * 1000 KB)    1000 * 1000 * 1024
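For instance, a quick sanity check of that arithmetic from a shell (DiskMark itself evaluates the expression for you, so this is purely illustrative):

echo $(( 64 * 1024 ))             # 65536 bytes (64 KB)
echo $(( 1000 * 1000 * 1024 ))    # 1024000000 bytes (1 GB, per the 1000-based convention above)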

Rounds

This is the number of times to process each payload.

Runs

This value relates to the repetition cycle.  For each cycle, values are gathered; at the completion of all the runs, averages are calculated.

Sample Result

Trial Run:

Here is the configuration of our Trial Run:

Analyze Result

Let us analyze the results of the trial run.

Disk    Disk Time    Write Performance    Read Performance
C       274.79       9.34 MB/s            128.32 MB/s
D       272.47       9.42 MB/s            125.60 MB/s
X       22.74        115.99 MB/s          287.97 MB/s
Y       25.56        101.97 MB/s          254.61 MB/s
Z       25.24        106.47 MB/s          277.02 MB/s

Conclusion

From using the open source tool DiskMark, we are quickly able to determine that our internal drives (C: & D:) will only deliver about 10 MB/sec when writing data.

The SAN storage drives, on the other hand, deliver an average of about 105 MB/sec when writing.

The numbers are not breaking-point numbers, as the tool is probably not writing data in parallel, using a read-ahead cache, etc.

The numbers are simply guidance numbers for comparing apples to apples, so to speak.

References

References – DiskMark

http://lifehacker.com/5824265/diskmark-is-a-free-and-easy-hard-drive-benchmark-tool

References – winsat

http://blog.dv411.com/2011/01/disk-test-quickie-windows.html

Staying Centered and Alive

It has been a bit of a melancholic couple of weeks for me.

Like they say, you're rarely the only one:

Frank X. Shaw, Corporate Vice President of Corporate Communications at Microsoft:

http://blogs.technet.com/b/microsoft_blog/archive/2013/05/10/staying-centered.aspx

There are many advantages to living in a world that is mostly connected. Feedback is immediate. Weak signals are easily amplified. Voices can be heard.

Of course, every benefit has a drawback.

In this world where everyone is a publisher, there is a trend to the extreme – where those who want to stand out opt for sensationalism and hyperbole over nuanced analysis.

In this world where page views are currency, heat is often more valued than light.

Stark black-and-white caricatures are sometimes more valued than shades-of-gray reality.

It’s been a week like that, from a couple of unlikely sources.

So let’s pause for a moment and consider the center.

Though dated, here is Drake's take.  It is the opening track on his sophomore album:

Drake – Over my Dead Body

But you know who really stayed alive and well: those three girls and their family in Cleveland.

Leaning on Drake:

Those girls wear crowns…

We all like to be heard

… as it is the spaces between Living and the lived.

Technical: Hadoop/Cloudera (v4.2.1) – Installation on CentOS (32 bit [x32])

Introduction

Here are quick preparation, processing, and validation steps for installing Cloudera – Hadoop (v4.2.1) on 32-bit CentOS.

Blueprint

I am using Cloudera's fine documentation, “http://www.cloudera.com/content/cloudera-content/cloudera-docs/CDH4/4.2.0/CDH4-Quick-Start/cdh4qs_topic_3_2.html”, as a basis.

It is very good documentation, but I stumbled a lot for lack of education and for glossing over important details.  And so, I chose to write things down.

Environment Constraints

Here are the constraints that I have to work with:

• My lab PC is an old Dell
• It is a 32-bit processor
• And, so I can only install a 32-bit Linux & Cloudera Distro

Concepts

Here are a couple of concepts that we will utilize:

• File System – Linux – Stickiness

Concepts : File System – Linux – Stickiness

Background

Sticky Bit

http://en.wikipedia.org/wiki/Sticky_bit

The most common use of the sticky bit today is on directories. When the sticky bit is set, only the item’s owner, the directory’s owner, or the superuser can rename or delete files. Without the sticky bit set, any user with write and execute permissions for the directory can rename or delete contained files, regardless of owner. Typically this is set on the /tmp directory to prevent ordinary users from deleting or moving other users’ files.

In Unix symbolic file system permission notation, the sticky bit is represented by the letter t in the final character-place.

Set Stickiness

http://en.wikipedia.org/wiki/Sticky_bit

The sticky bit can be set using the chmod command and can be set using its octal mode 1000 or by its symbol t (s is already used by the setuid bit). For example, to add the bit on the directory /tmp, one would type chmod +t /tmp. Or, to make sure that directory has standard tmp permissions, one could also type chmod 1777 /tmp.

To clear it, use chmod -t /tmp or chmod 0777 /tmp (using numeric mode will also change directory tmp to standard permissions).

Is Stickiness set?

http://en.wikipedia.org/wiki/Sticky_bit

In Unix symbolic file system permissions notation, the sticky bit is represented by the letter t in the final character-place. For instance, in our Linux Environment , the /tmp directory, which by default has the sticky-bit set, shows up as:
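A minimal sketch of that check (the exact sizes and dates will differ on your system):

ls -ld /tmp
# expect something like: drwxrwxrwt. 28 root root 4096 May 13 15:20 /tmp
# the trailing "t" in the permission string indicates that the sticky bit is set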

Sample:

for service in /etc/init.d/hadoop*; do echo $service; done

Screen Shot:

Starting CDH Services

http://www.cloudera.com/content/cloudera-content/cloudera-docs/CDH4/4.2.0/CDH4-Installation-Guide/cdh4ig_topic_27_1.html

Component: hadoop-hdfs-namenode
Command:   sudo /sbin/service hadoop-hdfs-namenode start
Log:       /var/log/hadoop-hdfs/hadoop-hdfs-namenode-<hostname>.log, /var/log/hadoop-hdfs/hadoop-hdfs-namenode-<hostname>.out

Component: hadoop-hdfs-secondarynamenode
Command:   sudo /sbin/service hadoop-hdfs-secondarynamenode start
Log:       /var/log/hadoop-hdfs/hadoop-hdfs-secondarynamenode-<hostname>.out

Component: hadoop-hdfs-datanode
Command:   sudo /sbin/service hadoop-hdfs-datanode start
Log:       /var/log/hadoop-hdfs/hadoop-hdfs-datanode-<hostname>.out

Component: hadoop-0.20-mapreduce-jobtracker
Command:   sudo /sbin/service hadoop-0.20-mapreduce-jobtracker start
Log:       /var/log/hadoop-0.20-mapreduce/hadoop-hadoop-jobtracker-<hostname>.out

Component: hadoop-0.20-mapreduce-tasktracker
Command:   sudo /sbin/service hadoop-0.20-mapreduce-tasktracker start
Log:       /var/log/hadoop-0.20-mapreduce/hadoop-hadoop-tasktracker-<hostname>.out

Start Services – Using /etc/init.d

Look for items in the /etc/init.d/ folder that have hadoop in their names and start them.

Syntax:

for service in /etc/init.d/<service-name>; do sudo $service start; done

Sample:

for service in /etc/init.d/hadoop-*; do sudo $service start; done

Errors:

Here are some errors we received, because I chose not to follow instructions or jumped over some steps.  One thing I have had to learn about Linux, and Enterprise Systems in general, is that "brevity in instructions is sacrosanct": you should make sure that you follow everything, or Google for help and hope that someone else made the same mistakes and posted the specific errors and their resolution.

Errors – HDFS-NameNode

Here are the HDFS Name Node errors.

The log file is:

• Syntax –> /var/log/hadoop-hdfs/hadoop-hdfs-namenode-<hostname>.log
• Sample –> /var/log/hadoop-hdfs/hadoop-hdfs-namenode-rachel.log

Error due to name resolution error

Specific Errors:

• ERROR org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Error getting localhost name.
• java.net.UnknownHostException: <hostname>: <hostname>
• at java.net.InetAddress.getLocalHost(InetAddress.java:1466)

Screen Dump:

ERROR org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Error getting localhost name. Using 'localhost'...
java.net.UnknownHostException: rachel: rachel
at java.net.InetAddress.getLocalHost(InetAddress.java:1466)
at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.getHostname(MetricsSystemImpl.java:496)
at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.configureSystem(MetricsSystemImpl.java:435)
at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.configure(MetricsSystemImpl.java:431)
at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.start(MetricsSystemImpl.java:180)
at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.init(MetricsSystemImpl.java:156)
at org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.init(DefaultMetricsSystem.java:54)
at org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.initialize(DefaultMetricsSystem.java:50)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1140)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1205)
Caused by: java.net.UnknownHostException: rachel
at java.net.Inet4AddressImpl.lookupAllHostAddr(Native Method)
at java.net.InetAddress$1.lookupAllHostAddr(InetAddress.java:894)
... 9 more



Explanation

Error due to HDFS being in an inconsistent state


namenode join

Directory /var/lib/hadoop-hdfs/cache/hdfs/dfs/name is in an inconsistent state:

storage directory does not exist or is not accessible.

INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1



Specific Errors:

• Directory /var/lib/hadoop-hdfs/cache/hdfs/dfs/name is in an inconsistent state: storage directory does not exist or is not accessible.

Screen Dump:

Explanation

• Please go ahead and format the HDFS NameNode (this should be run on the primary NameNode):
sudo -u hdfs hadoop namenode -format


Errors – HDFS-DataNode

Here are HDFS Data Node errors.

The log file is:

• Syntax –> /var/log/hadoop-hdfs/hadoop-hdfs-datanode-<hostname>.log
• Sample –> /var/log/hadoop-hdfs/hadoop-hdfs-datanode-rachel.log

Error due to host name resolution error

Screen Shot:



[dadeniji@rachel conf]$ cat /var/log/hadoop-hdfs/hadoop-hdfs-datanode-rachel.log

2013-05-13 15:24:32,357 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting DataNode
STARTUP_MSG:   host = java.net.UnknownHostException: rachel: rachel
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 2.0.0-cdh4.2.1
STARTUP_MSG:   build = file:///data/1/jenkins/workspace/generic-package-centos32-6/topdir/BUILD/hadoop-2.0.0-cdh4.2.1/src/hadoop-common-project/hadoop-common -r 144bd548d481c2774fab2bec2ac2645d190f705b; compiled by 'jenkins' on Mon Apr 22 10:26:05 PDT 2013
STARTUP_MSG:   java = 1.7.0_21
************************************************************/
2013-05-13 15:24:32,895 WARN org.apache.hadoop.hdfs.server.common.Util: Path /var/lib/hadoop-hdfs/cache/hdfs/dfs/data should be specified as a URI in configuration files. Please update hdfs configuration.
2013-05-13 15:24:33,962 FATAL org.apache.hadoop.hdfs.server.datanode.DataNode: Exception in secureMain
java.net.UnknownHostException: rachel: rachel
at java.net.InetAddress.getLocalHost(InetAddress.java:1466)
at org.apache.hadoop.security.SecurityUtil.getLocalHostName(SecurityUtil.java:223)
at org.apache.hadoop.security.SecurityUtil.login(SecurityUtil.java:243)
at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1694)
at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1719)
at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:1872)
at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:1893)
Caused by: java.net.UnknownHostException: rachel
at java.net.Inet4AddressImpl.lookupAllHostAddr(Native Method)
at java.net.InetAddress$1.lookupAllHostAddr(InetAddress.java:894)
... 6 more

2013-05-13 15:24:33,987 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1

/************************************************************
SHUTDOWN_MSG: Shutting down DataNode at java.net.UnknownHostException:
rachel: rachel
************************************************************/



Explanation

• We need to go in quickly and make sure we are able to resolve our host name; for this specific host, the host name is rachel (a minimal sketch of one fix follows below)
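A minimal sketch of one common fix, assuming a single-node lab box named rachel and that a loopback mapping is acceptable (adjust the IP address for your environment):

# map the host name so that Java's InetAddress.getLocalHost() can resolve it
echo "127.0.0.1   rachel" | sudo tee -a /etc/hosts

# verify that the name now resolves
ping -c 1 rachel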
Error due to required service not running


WARN org.apache.hadoop.hdfs.server.common.Util: Path /var/lib/hadoop-hdfs/cache/hdfs/dfs/data should be specified as a URI in configuration files.

INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).

system started

INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Configured hostname is rachel

server at /0.0.0.0:50010

bandwith is 1048576 bytes/s

via org.mortbay.log.Slf4jLog

(class=org.apache.hadoop.http.HttpServer$QuotingInputFilter) INFO org.apache.hadoop.http.HttpServer: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context datanode

INFO org.apache.hadoop.http.HttpServer: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static INFO org.apache.hadoop.http.HttpServer: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter)
to context logs

at 0.0.0.0:50075

INFO org.apache.hadoop.http.HttpServer: Jetty bound to port 50075

INFO org.mortbay.log: jetty-6.1.26.cloudera.2

INFO org.mortbay.log: Started SelectChannelConnector@0.0.0.0:50075

50020

INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Opened IPC server at /0.0.0.0:50020

BPOfferServices for nameservices: <default>

WARN org.apache.hadoop.hdfs.server.common.Util: Path /var/lib/hadoop-hdfs/cache/hdfs/dfs/data should be specified as a URI in configuration files.

<registering> (storage id unknown) service to localhost/127.0.0.1:8020
starting to offer service

INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting

INFO org.apache.hadoop.ipc.Server: IPC Server listener on 50020: starting

INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:8020. Already tried 0 time(s); retry policy is
RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)

INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:8020. Already tried 1 time(s); retry policy is
RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)



Explanation

• It looks like we are breaking when trying to communicate with host localhost, port 8020
• So, what is supposed to be listening on port 8020?
• Let us go make sure that the Hadoop NameNode is running and listening on port 8020 (a quick check is sketched below)
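Using commands that appear elsewhere in this post, a quick sketch of that check:

sudo lsof -Pnl +M -i4 -i6 | grep ":8020"

# if nothing shows up, start the NameNode and re-check
sudo /sbin/service hadoop-hdfs-namenode start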
Errors – MapReduce – Job Tracker

The log file is:

• Syntax –> /var/log/hadoop-0.20-mapreduce/hadoop-hadoop-jobtracker-<hostname>.log
• Sample –> /var/log/hadoop-0.20-mapreduce/hadoop-hadoop-jobtracker-rachel.log

Error due to MapReduce / File System Permission Error

Specific Errors:

• INFO org.apache.hadoop.mapred.JobTracker: Creating the system directory
• WARN org.apache.hadoop.mapred.JobTracker: Failed to operate on mapred.system.dir (hdfs://localhost:8020/var/lib/hadoop-hdfs/cache/mapred/mapred/system) because of permissions.
• WARN org.apache.hadoop.mapred.JobTracker: This directory should be owned by the user ‘mapred (auth:SIMPLE)’
• WARN org.apache.hadoop.mapred.JobTracker: Bailing out …
• org.apache.hadoop.security.AccessControlException: Permission denied: user=mapred, access=WRITE, inode=”/”:hdfs:supergroup:drwxr-xr-x

Screen Dump:



with processName=JobTracker, sessionId=
INFO org.apache.hadoop.mapred.JobTracker: JobTracker up at: 8021

INFO org.apache.hadoop.mapred.JobTracker: Creating the system directory

permissions

WARN org.apache.hadoop.mapred.JobTracker: This directory should be owned
by the user 'mapred (auth:SIMPLE)'

WARN org.apache.hadoop.mapred.JobTracker: Bailing out ...
org.apache.hadoop.security.AccessControlException: Permission denied: user=mapred, access=WRITE, inode="/":hdfs:supergroup:drwxr-xr-x
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44096)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)

Explanation

• We are having HDFS file system permission problems
• Are the folders not created, or are they created and we are only having problems with the way they are privileged?
• I remembered that there was extended coverage of HDFS MapReduce folder permissions in the Cloudera Docs.  Let us go review and apply those permissions

Configuring init to start core Hadoop Services
http://www.cloudera.com/content/cloudera-content/cloudera-docs/CDH4/4.2.0/CDH4-Installation-Guide/cdh4ig_topic_27_2.html

Stopping Hadoop Services
http://www.cloudera.com/content/cloudera-content/cloudera-docs/CDH4/4.2.0/CDH4-Installation-Guide/cdh4ig_topic_27_3.html

Post Installation Review

Services – Review

Commands – service --status-all

sudo service --status-all | egrep -i "jobtracker|tasktracker|Hadoop"

Output:

Commands – tcp/ip services (listening)

sudo lsof -Pnl +M -i4 -i6 | grep LISTEN

I tried running lsof, but got an error message; in the interim, ps can be used instead:

ps -aux 2> /dev/null | grep "java"

Screen Dump (lsof: command not found):

Once lsof was installed (via the instructions previously given):

Output:

Explanation:

Explanation – Java

• We have quite a few listening Java processes
• The Java processes are listening on TCP/IP ports between 50010 and 50090; specifically 50010, 50020, 50030, 50060, 50070, 50075
• And also ports 8010 and 8020

Explanation – Auxiliary Services

• sshd (port 22)
• cupsd (port 631)

Commands – ps (running java applications)

ps -eo pri,pid,user,args | grep -i "java" | grep -v "grep" | awk '{printf "%-10s %-10s %-10s %-120s \n", $1, $2, $3, $4}'

Output:

Interpretation:

• With each Java app one will see -Dproc_secondarynamenode, -Dproc_namenode, or -Dproc_jobtracker –> this indicator maps to the specific Hadoop service

Operational Errors

Operational Errors – HDFS – Name Node

Operational Errors – HDFS – Name Node – Security – Permission Denied

mkdir: Permission denied: user=dadeniji, access=WRITE, inode="/user/dadeniji":hdfs:supergroup:drwxr-xr-x

Validate:

Check the permissions for HDFS under the /user folder:

sudo -u hdfs hadoop fs -ls /user

We received:

Explanation:

• My folder, /user/dadeniji, is still owned by hdfs.  Let us go change it:

sudo -u hdfs hadoop fs -chown $USER /user/$USER

Validate Fix:

hadoop fs -ls /user/$USER


Output:

Operational Errors – HDFS – DataNode


13/05/16 15:59:08 ERROR security.UserGroupInformation: PriviledgedActionException as:dadeniji (auth:SIMPLE) cause
mapred:supergroup:drwx------
8)
odeProtocolServerSideTranslatorPB.java:643)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlocki
ngMethod(ClientNamenodeProtocolProtos.java:44128)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)



When we issued:



sudo -u hdfs hadoop fs -ls  \



Explanation:

• For my personalized HDFS staging folder (/var/lib/hadoop-hdfs/cache/mapred/mapred/staging/dadeniji), the permission set is drwx------.
• It appears that the owner (mapred) is the only account that has any permissions.
• The Cloudera Docs are quite prescient about this type of error (see the excerpt below, and the sketch that follows it):

Installing CDH4 in Pseudo-Distributed Mode
Starting Hadoop and Verifying it is Working Properly:
Create mapred system directories
http://www.cloudera.com/content/cloudera-content/cloudera-docs/CDH4/4.2.0/CDH4-Quick-Start/cdh4qs_topic_3_2.html
If you do not create /tmp properly, with the right permissions as shown below, you may have problems with CDH components later. Specifically, if you don’t create /tmp yourself, another process may create it automatically with restrictive permissions that will prevent your other applications from using it.
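Following that guidance, here is a sketch of the HDFS directory setup along the lines of the quick-start document (the paths match the pseudo-distributed layout used in this post; treat it as a guide rather than a verbatim recipe):

sudo -u hdfs hadoop fs -mkdir /tmp
sudo -u hdfs hadoop fs -chmod -R 1777 /tmp
sudo -u hdfs hadoop fs -mkdir -p /var/lib/hadoop-hdfs/cache/mapred/mapred/staging
sudo -u hdfs hadoop fs -chmod 1777 /var/lib/hadoop-hdfs/cache/mapred/mapred/staging
sudo -u hdfs hadoop fs -chown -R mapred /var/lib/hadoop-hdfs/cache/mapred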

Let us go correct it:

As it is merely a staging folder, let us remove it and hope that the system re-creates it:



sudo -u hdfs hadoop fs -rm -r \



Once corrected we can run the MapReduce jobs.

Technical : Microsoft – Windows Developer rant – “We are slower than other operating systems”

Like writers, programmers write because they have to. Whether creating software, prose, technical documentation, blogs, or emails, programmers do it (that is, write) because they have to.

They have to express themselves.

It all started off with a small comment on Hacker News,  posted here:

https://news.ycombinator.com/item?id=5689391
There is not much discussion about Windows internals, not only because they are not shared, but also because quite frankly the Windows kernel evolves slower than the Linux kernel in terms of new algorithms implemented. For example it is almost certain that Microsoft never tested I/O schedulers, process schedulers, filesystem optimizations, TCP/IP stack tweaks for wireless networks, etc, as much as the Linux community did. One can tell just by seeing the sheer amount of intense competition and interest amongst Linux kernel developers to research all these areas.

The net result of that is a generally acknowledged fact that Windows is slower than Linux when running complex workloads that push network/disk/cpu scheduling to its limit: https://news.ycombinator.com/item?id=3368771 A really concrete and technical example is the network throughput in Windows Vista which is degraded when playing audio! http://blogs.technet.com/b/markrussinovich/archive/2007/08/2…

Note: my post may sound like I am freely bashing Windows, but I am not. This is the cold hard truth. Countless multi-platform developers will attest to this, me included. I can’t even remember the number of times I have written a multi-platform program in C or Java that always runs slower on Windows than on Linux, across dozens of different versions of Windows and Linux. The last time I troubleshooted a Windows performance issue, I found out that the MFT of an NTFS filesystem was being fragmented; this is to say I am generally regarded as the one guy in the company who can troubleshoot any issue, yet I acknowledge I can almost never get Windows to perform as well as, or better than, Linux, when there is a performance discrepancy in the first place.

Like all good write-ups, it continued here:

http://blog.zorinaq.com/?e=74
I contribute to the Windows Kernel.  We are slower than other operating systems.  Here is why.

I was explaining on Hacker News why Windows fell behind Linux in terms of operating system kernel performance and innovation. And out of nowhere an anonymous Microsoft developer who contributes to the Windows NT kernel wrote a fantastic and honest response acknowledging this problem and explaining its cause. His post has been deleted! Why the censorship? I am reposting it here. This is too insightful to be lost. [Edit: The anonymous poster himself deleted his post as he thought it was too cruel and did not help make his point, which is about the social dynamics of spontaneous contribution.

However he let me know he does not mind the repost at the condition I redact the SHA1 hash info, which I did.][Edit: A second statement, apologetic, has been made by the anonymous person. See update at the bottom.]

"""

I’m a developer in Windows and contribute to the NT kernel. (Proof: the SHA1 hash of revision #102 of [Edit: filename redacted] is [Edit: hash redacted].) I’m posting through Tor for obvious reasons.

Windows is indeed slower than other operating systems in many scenarios, and the gap is worsening. The cause of the problem is social. There’s almost none of the improvement for its own sake, for the sake of glory, that you see in the Linux world.

Granted, occasionally one sees naive people try to make things better. These people almost always fail. We can and do improve performance for specific scenarios that people with the ability to allocate resources believe impact business goals, but this work is Sisyphean. There’s no formal or informal program of systemic performance improvement. We started caring about security because pre-SP3 Windows XP was an existential threat to the business. Our low performance is not an existential threat to the business.

See, component owners are generally openly hostile to outside patches: if you’re a dev, accepting an outside patch makes your lead angry (due to the need to maintain this patch and to justify in shiproom the unplanned design change), makes test angry (because test is on the hook for making sure the change doesn’t break anything, and you just made work for them), and makes PM angry (due to the schedule implications of code churn). There’s just no incentive to accept changes from outside your own team. You can always find a reason to say “no”, and you have very little incentive to say “yes”.

There’s also little incentive to create changes in the first place. On linux-kernel, if you improve the performance of directory traversal by a consistent 5%, you’re praised and thanked. Here, if you do that and you’re not on the object manager team, then even if you do get your code past the Ob owners and into the tree, your own management doesn’t care. Yes, making a massive improvement will get you noticed by senior people and could be a boon for your career, but the improvement has to be very large to attract that kind of attention. Incremental improvements just annoy people and are, at best, neutral for your career. If you’re unlucky and you tell your lead about how you improved performance of some other component on the system, he’ll just ask you whether you can accelerate your bug glide.

Is it any wonder that people stop trying to do unplanned work after a little while?

Another reason for the quality gap is that we’ve been having trouble keeping talented people. Google and other large Seattle-area companies keep poaching our best, most experienced developers, and we hire youths straight from college to replace them. You find SDEs and SDE IIs maintaining hugely important systems. These developers mean well and are usually adequately intelligent, but they don’t understand why certain decisions were made, don’t have a thorough understanding of the intricate details of how their systems work, and most importantly, don’t want to change anything that already works.

These junior developers also have a tendency to make improvements to the system by implementing brand-new features instead of improving old ones. Look at recent Microsoft releases: we don’t fix old features, but accrete new ones. New features help much more at review time than improvements to old ones.

(That’s literally the explanation for PowerShell. Many of us wanted to improve cmd.exe, but couldn’t.)

More examples:

• We can’t touch named pipes. Let’s add %INTERNAL_NOTIFICATION_SYSTEM%! And let’s make it inconsistent with virtually every other named NT primitive.
• We can’t expose %INTERNAL_NOTIFICATION_SYSTEM% to the rest of the world because we don’t want to fill out paperwork and we’re not losing sales because we only have 1990s-era Win32 APIs available publicly.
• We can’t touch DCOM. So we create another %C#_REMOTING_FLAVOR_OF_THE_WEEK%!
• XNA. Need I say more?
• Why would anyone need an archive format that supports files larger than 2GB?
• Let’s support symbolic links, but make sure that nobody can use them so we don’t get blamed for security vulnerabilities (Great! Now we get to look sage and responsible!)
• We can’t touch Source Depot, so let’s hack together SDX!
• We can’t touch SDX, so let’s pretend for four releases that we’re moving to TFS while not actually changing anything!
• Oh god, the NTFS code is a purple opium-fueled Victorian horror novel that uses global recursive locks and SEH for flow control. Let’s write ReFs instead. (And hey, let’s start by copying and pasting the NTFS source code and removing half the features! Then let’s add checksums, because checksums are cool, right, and now with checksums we’re just as good as ZFS? Right? And who needs quotas anyway?)
• We just can’t be fucked to implement C11 support, and variadic templates were just too hard to implement in a year. (But ohmygosh we turned “^” into a reference-counted pointer operator. Oh, and what’s a reference cycle?)

Look: Microsoft still has some old-fashioned hardcore talented developers who can code circles around brogrammers down in the valley. These people have a keen appreciation of the complexities of operating system development and an eye for good, clean design. The NT kernel is still much better than Linux in some ways — you guys be trippin’ with your overcommit-by-default MM nonsense — but our good people keep retiring or moving to other large technology companies, and there are few new people achieving the level of technical virtuosity needed to replace the people who leave. We fill headcount with nine-to-five-with-kids types, desperate-to-please H1Bs, and Google rejects. We occasionally get good people anyway, as if by mistake, but not enough. Is it any wonder we’re falling behind? The rot has already set in.

"""

Edit: This anonymous poster contacted me, still anonymously, to make a second statement, worried by the attention his words are getting:

"""

All this has gotten out of control. I was much too harsh, and I didn’t intend this as some kind of massive exposé. This is just grumbling. I didn’t appreciate the appetite people outside Microsoft have for Kremlinology. I should have thought through my post much more thoroughly. I want to apologize for presenting a misleading impression of what it’s like on the inside.

First, I want to clarify that much of what I wrote is tongue-in-cheek and over the top — NTFS does use SEH internally, but the filesystem is very solid and well tested. The people who maintain it are some of the most talented and experienced I know. (Granted, I think they maintain ugly code, but ugly code can back good, reliable components, and ugliness is inherently subjective.) The same goes for our other core components. Yes, there are some components that I feel could benefit from more experienced maintenance, but we’re not talking about letting monkeys run the place. (Besides: you guys have systemd, which if I’m going to treat it the same way I treated NTFS, is an all-devouring octopus monster about to crawl out of the sea and eat Tokyo and spit it out as a giant binary logfile.)

In particular, I don’t have special insider numbers on poaching, and what I wrote is a subjective assessment written from a very limited point of view — I watched some very dear friends leave and I haven’t been impressed with new hires, but I am *not* HR. I don’t have global facts and figures. I may very well be wrong on overall personnel flow rates, and I shouldn’t have made the comment I did: I stated it with far more authority than my information merits.

Windows and Microsoft still have plenty of technical talent. We do not ship code that someone doesn’t maintain and understand, even if it takes a little while for new people to ramp up sometimes. While I have read and write access to the Windows source and commit to it once in a while, so do tens and tens of thousands of other people all over the world. I am nobody special. I am not Deep Throat. I’m not even Steve Yegge. I’m not the Windows equivalent of Ingo Molnar. While I personally think the default restrictions placed on symlinks limited their usefulness, there *was* a reasoned engineering analysis — it wasn’t one guy with an ulterior motive trying to avoid a bad review score. In fact, that practically never happens, at least consciously. We almost never make decisions individually, and while I maintain that social dynamics discourage risk-taking and spontaneous individual collaboration, I want to stress that we are not insane and we are not dysfunctional. The social forces I mentioned act as a drag on innovation, and I think we should do something about the aspects of our culture that I highlighted, but we’re far from crippled. The negative effects are more like those incurred by mounting an unnecessary spoiler on a car than tearing out the engine block. What’s indisputable fact is that our engineering division regularly runs and releases dependable, useful software that runs all over the world. No matter what you think of the Windows 8 UI, the system underneath is rock-solid, as was Windows 7, and I’m proud of having been a small part of this entire process.

I also want to apologize for what I said about devdiv. Look: I might disagree with the priorities of our compiler team, and I might be mystified by why certain C++ features took longer to implement for us than for the competition, but seriously good people work on the compiler. Of course they know what reference cycles are. We’re one of the only organizations on earth that’s built an impressive optimizing compiler from scratch, for crap’s sake.

Last, I’m here because I’ve met good people and feel like I’m part of something special. I wouldn’t be here if I thought Windows was an engineering nightmare. Everyone has problems, but people outside the company seem to infuse ours with special significance. I don’t get that. In any case, I feel like my first post does wrong by people who are very dedicated and who work quite hard. They don’t deserve the broad and ugly brush I used to paint them.

P.S. I have no problem with family people, and want to retract the offhand comment I made about them. I work with many awesome colleagues who happen to have children at home. What I really meant to say is that I don’t like people who see what we do as more of a job than a passion, and it feels like we have a lot of these people these days. Maybe everyone does, though, or maybe I’m just completely wrong.

Introduction

The ability to run certain applications is restricted to the root user or to users that are able to acquire administrative privileges.

Thus, to successfully manage systems, one needs to be able to log in as the root account or as one of the accounts that can act in its place.

Which processes can only be executed by “root” users?

These so-called restricted modules have an s in the owner execute flag when viewed using ls -la.


# check the /bin folder and list files that have the signature "-rws"
ls -la /bin/* | grep -i "\-rws"



There are a couple of things you want to note:

• You need to escape the - symbol when identifying -rws; you escape the - character by using the back-slash (\)
• Notice that we are looking at the first three letters, which signify the permission set for the owner
• r: the owner is able to read the file
• w: the owner is able to write/over-write the file
• s: this position usually holds x, indicating that the owner can execute the file.  When it is s rather than x, whoever executes the file takes on the role of the file's owner (the setuid bit)

Taking on the root role via membership in the wheel group

By convention, Linux uses a group named wheel as a surrogate group that can take on the role of the admin.

Where did the name wheel come from?

http://en.wikipedia.org/wiki/Wheel_%28Unix_term%29

In computing, the term wheel refers to a user account with a wheel bit, a system setting that provides additional special system privileges that empower a user to execute restricted commands that ordinary user accounts cannot access.  The term is derived from the slang phrase big wheel, referring to a person with great power or influence.

What is the “Wheel Group”

http://en.wikipedia.org/wiki/Wheel_%28Unix_term%29

Modern Unix systems use user groups to control access privileges. The wheel group is a special user group used on some Unix systems to control access to the su command, which allows a user to masquerade as another user (usually the super user).

Adding user to the wheel group

Command Shell – Utility – usermod

To modify user accounts, Linux relies on the usermod utility.  Here are a few quick points:

• The file's full name is /usr/sbin/usermod
• One can change the user's home directory via the -d (--home) option
• One can change the user's primary group via the -g (--gid) option
• One can wholly replace the user's group membership via the -G (--groups) option
• One can add to the user's existing groups by using the -a (--append) option, together with -G
• One can change the user's shell by using the -s (--shell) option
• One can unlock an account by using the -U (--unlock) option
Usermod – Add user to the wheel group

To add our user (myself, in this case) to the wheel group, do the following:


Syntax:

Sample:
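A minimal sketch, assuming the account name is dadeniji (run it as root, since the account cannot use sudo yet):

/usr/sbin/usermod -a -G wheel dadeniji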



Thankfully, you get nice, indicative messages when the group or user name does not exist on the system:

• user does not exist
• group does not exist

When things are good, we get no feedback.

Groups – Review User Group Membership

Get user groups


Syntax:

Sample:
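A minimal sketch, again assuming the account name dadeniji:

groups dadeniji

id dadeniji      # also shows the numeric uid and gid values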



Output:

Groups – List all users in a group

List all users in a group


Sample:

grep :`grep ^wheel /etc/group | cut -d: -f3`: /etc/passwd



Output:

Explanation of Script:

• The surrounding backticks (`...`) mean that the inner script is run and its result substituted in place, rather than displayed to the console
• What does the inner script (grep ^wheel /etc/group) do?  It gets the line in /etc/group that starts with the word wheel.  In its entirety, that line reads “wheel:x:10:”
• The output of “grep ^wheel /etc/group” is piped (|) to the cut utility.  The syntax “cut -d: -f3” says to get the third field using the colon (:) as the delimiter.  So when we ask for the third field of “wheel:x:10:”, we get back 10.  10 is the group ID for wheel
• Please note that you need the colons (:) around the inner script; without them, I got an extraneous row, as in the code and output pasted below:

Code (code and console output):


Command:
grep -e `grep ^wheel /etc/group | cut -d: -f3` /etc/passwd

Output:

-------------------------------------------------------------------------------
Command:
grep -e :`grep ^wheel /etc/group | cut -d: -f3`: /etc/passwd

Output:



Output (Screen shot):

Explanation:

• Without the colons (:), one will see an extra record for games.  The games group ID is not 10, but 100; the bare 10 merely matches as a substring of 100

Ensure that wheel has sudo access via customization of sudoers

Why bother with sudoers?

If an account tries to use sudo without membership in the wheel group, or if the wheel group is not fully configured for sudo access via the sudoers file, then the error message pasted below will come up:

Output (Text):

<account> is not in the sudoers file.  This incident will be reported.


Output (Screenshot):

Email

The next time you, the root user, access the system, you will get a nice little notification telling you that you have an email waiting for you:

You have mail in /var/spool/mail/root


To view the email issue something like

tail /var/spool/mail/root


Screen shot:

The email sent by the gossip delegator is quite straightforward.  The areas covered include:

• To — root
• From — dadeniji (in our case)
• Auto-Submitted: auto-generated
• Subject: **SECURITY information for <hostname>
• Message-Id: ******
• Date: *****
tail /var/spool/mail/root


Email Contents:



rachel : May 12 17:41:27 : dadeniji : user NOT in sudoers ; TTY=pts/2 ; PWD=/home/dadeniji ; USER=root ; COMMAND=/bin/ls



Using visudo

Launch visudo:

visudo


Look for the lines that reference the wheel group:

Shipped:



## Allows people in group wheel to run all commands
# %wheel        ALL=(ALL)       ALL

## Same thing without a password
# %wheel        ALL=(ALL)       NOPASSWD: ALL


• The statements are refreshingly well documented
• I suggest that you un-comment the line that references wheel but does not mention NOPASSWD

Revised:



## Allows people in group wheel to run all commands
%wheel        ALL=(ALL)       ALL

## Same thing without a password
# %wheel        ALL=(ALL)       NOPASSWD: ALL



Corrected:

Validation


sudo ls -la *

Output:

Now, when we issue sudo <command> and supply our account's (dadeniji) password, we are good.


Technical – Hadoop – Hive – What is the Version # of Hive Service and Client that you are running?

Introduction

Hadoop is a speeding bullet.  You look online, Google for things, try it out, and sometimes you hit, but often you miss.

What do I mean by that?

Well this evening I was trying to play with Hive; specifically using Sqoop to import a table from MS SQL Server into Hive.

A bit of background, my MS SQL Server table has a couple of columns declared as datetime.

Upon running the Sqoop statement pasted below:



--connect "jdbc:sqlserver://sqlServerLab;database=DEMO" \
--driver "com.microsoft.sqlserver.jdbc.SQLServerDriver" \
-m 1 \
--hive-import \
--hive-table "customer" \
--table "dbo.customer" \
--split-by "customerID"



The above command basically gives the following instruction set:

• Via JDBC Driver (jdbc:sqlserver) connect to SQL Instance (sqlServerLab) and database Demo
• JDBC Driver’s Class name – com.microsoft.sqlserver.jdbc.SQLServerDriver
• Number of map tasks (-m 1)
• Sqoop Operation — hive-import
• Hive Table — customer
• SQL Server Table — dbo.customer
• Split-by — customerID

I noticed a couple of warnings in the Sqoop console log output:



INFO manager.SqlManager: Executing SQL statement:
SELECT t.* FROM dbo.customer AS t WHERE 1=0

WARN hive.TableDefWriter: Column InsertTime had to be cast to a less
precise type in Hive

WARN hive.TableDefWriter: Column salesDate had to be cast to a less
precise type in Hive



Processing

Explore MS SQL Server

So I quickly went back and looked at my SQL Server table:

  use [Demo];
exec sp_help 'dbo.customer';


Output:

The output is congruent with my thoughts:

• The InsertTime is a datetime column
• The salesDate is a datetime column

Explore Hive

Launch Hive:

In shell, issue “hive” to initiate Hive Shell:


hive

List all tables:

To confirm that a corresponding table has been created in Hive, list the tables:


show tables;

Output:

Display Table Structure (customer):

Display table structure using describe:


Syntax:
describe <table-name>;

Sample:

describe customer;

Output:

Explanation:

• So it is obvious that our two original MS SQL Server date columns (InsertTime and salesDate) were not brought in as datetime, but as string

So I am thinking: why?

Hive Datatype Support

I know that the Timestamp datatype was not one of the original datatypes supported by Hive.  It was added in Hive version 0.8.0.

This is noted in:

HortonWorks – Hive – Language Manual – Datatypes
http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.0.0.2/ds_Hive/language_manual/datatypes.html

Determine Hive Version

There are a couple of ways to get the Hive server and client version numbers.

Determine Hive Version – Command Shell – Using ps

issue ps -aux



ps -aux | grep -i "Hive"



Output (Screen shot):

Output (Text):



410      13853  0.0  0.2 2207844 22824 ?       Ss   Apr15   0:00 postgres: hive hive 10.0.4.1(56963) idle

410      13854  0.0  0.1 2206552 8388 ?        Ss   Apr15   0:00 postgres: hive hive 10.0.4.1(56964) idle


• We have 4 processes bearing the “hive” name

Service Process

• It is identifiable as a Hive Service via its name hive-service*.jar
• It is running under the “hive” account name.  Its Process ID is 13767.  One of the Jar files referenced is /opt/cloudera/parcels/CDH-4.2.0-1.cdh4.2.0.p0.10/lib/hive/lib/hive-service-0.10.0-cdh4.2.0.jar
• The Cloudera Version# is 4.2 and Hive Version# is 0.10

Client Process

• It is identifiable as a Hive Client via its name hive-cli*.jar
• It is running under my username (dadeniji), as I kicked it off.  Its Process ID is 18749.  One of the Jar files referenced is /opt/cloudera/parcels/CDH-4.2.0-1.cdh4.2.0.p0.10/bin/../lib/hive/lib/hive-cli-0.10.0-cdh4.2.0.jar
• The Cloudera Version# is 4.2 and Hive Version# is 0.10

“Postgres” Process

• Hive uses an embedded PostgreSQL database (for its metastore)
• The processes are running under account 410

Determine Hive Version – Cloudera Manager Admin Console & Command Shell

• Launch Web Browser
• Connect to the Admin console (http://<clouderaManagerServices>:<port>).  In our case, http://hadoopCMS:7180, as the Cloudera Manager Service is running on a machine named hadoopCMS and we kept the default port of 7180
• The initial screen displayed in the Service Status page (/cmf/services/status)
• Click on the service we are interested in (hive1)
• The service’s specific “Status and Health Summary” screen is displayed.  In this case “Hive1 – Services and Health Summary” page
• In the row labelled “Hive MetaStore Server” Click on the link underneath the “Status” column
• This will bring you to the “hivemetastore” summary page.
• For each Hive host, Hive process information and links the Hive Logs are displayed
• On the “Show Recent Logs” row, click on “Full Stdout” log
• The stdout.log appears.  Here is a breakdown of what it provides:

stdout.log

Mon Apr 15 21:06:24 UTC 2013
using /usr/java/default as JAVA_HOME

using 4 as CDH_VERSION

using /opt/cloudera/parcels/CDH-4.2.0-1.cdh4.2.0.p0.10/lib/hive
as HIVE_HOME

using /var/run/cloudera-scm-agent/process/22-hive-HIVEMETASTORE
as HIVE_CONF_DIR

Starting Hive Metastore Server


Java version

We quickly see that JAVA_HOME is defined as /usr/java/default.

To see what files constitute /usr/java/default

  ls /usr/java/default

Output:

Explanation:

• /usr/java/default is symbolically linked to /usr/java/latest
• /usr/java/latest is symbolically linked to /usr/java/jdk1.7.0_17 (see the quick check below)
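To follow that symlink chain in one step, a quick sketch:

readlink -f /usr/java/default     # resolves to the final target, e.g. /usr/java/jdk1.7.0_17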
Cloudera Distribution version

Based on the screen shot below, the CDH Version is 4

using 4 as CDH_VERSION
Hive Home

Based on the screen shot below, the Hive Home is /opt/cloudera/parcels/CDH-4.2.0-1.cdh4.2.0.p0.10/lib/hive

using /opt/cloudera/parcels/CDH-4.2.0-1.cdh4.2.0.p0.10/lib/hive as HIVE_HOME

Again, let us return to the command shell and see what files are in /opt/cloudera/parcels/CDH-4.2.0-1.cdh4.2.0.p0.10/lib/hive

Please add the /lib suffix to get to the Jar files, and list only the jar files that have hive in their names.



ls /opt/cloudera/parcels/CDH-4.2.0-1.cdh4.2.0.p0.10/lib/hive/lib/hive*.jar



Output:

Cloudera Manager Admin Console – Service Status

Cloudera Manager Admin Console – Hive1 – Status and Health Summary

Cloudera Manager Admin Console – Hive1 – Status Summary

Cloudera Manager Admin Console – Hive1 – Status Summary – Log – Stdout.log

Conclusion

It thus appears that we are running a version of Hive (0.10) that, in this case, did not support the Timestamp datatype.

The problem can also be with the version of Sqoop we have running or Sqoop’s ability to detect SQL Server’s datetime datatype or datetime data representation in general.
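If forcing the column mapping is acceptable, one workaround worth experimenting with is Sqoop's --map-column-hive option; a sketch, assuming the installed Sqoop release honors it for these columns:

sqoop import \
--connect "jdbc:sqlserver://sqlServerLab;database=DEMO" \
--driver "com.microsoft.sqlserver.jdbc.SQLServerDriver" \
-m 1 \
--hive-import \
--hive-table "customer" \
--table "dbo.customer" \
--split-by "customerID" \
--map-column-hive "InsertTime=TIMESTAMP,salesDate=TIMESTAMP"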

Technical: Hadoop – ZooKeeper – Client (Cloudera)

Introduction

http://www.cloudera.com/content/cloudera-content/cloudera-docs/CDH4/latest/CDH4-Installation-Guide/cdh4ig_topic_21.html

ZooKeeper is a high-performance coordination service for distributed applications. It exposes common services — such as naming, configuration management, synchronization, and group services – in a simple interface so you don’t have to write them from scratch. You can use it off-the-shelf to implement consensus, group management, leader election, and presence protocols. And you can build on it for your own, specific needs.

What are we trying to do

Review the ZooKeeper client bundled with the Cloudera Hadoop distribution.  The tool is known as zookeeper-client.

Configuration Validation

Folder – /etc/init.d/*cloudera*


ls -la /etc/init.d/*cloudera*


Screen Shot:

Service – Status all



sudo service --status-all 2> /dev/null | grep -i "cloudera"



Screen Shot:

ZooKeeper Client – Launch Shell


zookeeper-client


Output:

Press Enter to get a command entry point.

ZooKeeper Client – Help

To get a listing of commands:


help


Output:

ZooKeeper Client – Quit

To close the shell, issue the Quit command.


quit


Output:

ZooKeeper Client – Connect

To connect to another ZooKeeper server, issue the connect command:



Syntax:
connect <hostname>:<portNumber>

Sample:
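A minimal sketch, assuming a ZooKeeper server on the local machine listening on the default client port (2181):

connect localhost:2181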



Output:

ZooKeeper Client – ls

ZooKeeper's primary objects are folders and files (znodes).  To get a list of folders and files, issue:

Syntax:

ls <folder-name>
Sample:

ls /
ls /hbase
ls /zookeeper

On a base Zookeeper install, the base folders are /hbase and /zookeeper.

Output:

ls /hbase

ls /zookeeper

ZooKeeper Client – Create Folder

Syntax:

create <folder-name> <associated-data>
Sample:

create  /corporate corp
create  /corporate/HR  corpHR

ZooKeeper Client – Remove Folder

Syntax:

rmr <folder-name>
Sample:

rmr  /corpSec8

ZooKeeper Client – getAcl

To get permissions issue the getAcl command.

To get the permission set for the folder /corporate/HR:

Syntax:

getAcl <folder-name>
Sample:

getAcl  /corporate/HR

Output:

Explanation:

• Scheme -> world
• User -> anyone (default and only allowable user)
• Permission –> crdwa

To get permission set for folder /advert:


Syntax:

getAcl <folder-name>
Sample:

getAcl  /advert

Output:

Explanation:

• Scheme -> digest
• Permission –> crdw

ZooKeeper Client – setAcl

To set permissions issue the setAcl command.

We have included the setAcl commands simply as an engineering exercise.  I would discourage employing them, for the following reasons:

• The folder can become totally inaccessible
• They are indomitable

Indomitable:

• They cannot be removed; there is no resetAcl API
• They are not cumulative

ZooKeeper Client – setAcl – Scheme (Host)

To set permissions for a specific host, or for hosts that are in the same domain, use the host scheme:

Everyone whose host name has the corp.com moniker:

Syntax:

setAcl <folder-name> host:<domain-name>:<permission-set>

Sample:

setAcl /advert host:corp.com:crwda

The host whose FQDN name is appServer1.corp.com:

Syntax:

setAcl <folder-name> host:<hostname>:<permission-set>

Sample:

setAcl /advert host:appServer1.corp.com:cdrwa

ZooKeeper Client – setAcl – Scheme (IP Address)

To assign all permissions to a specific IP Address {10.0.4.70}:

Syntax:

setAcl <folder-name> ip:<ip-address>:<permission-set>

Sample:

setAcl  /corpSec7 ip:10.0.4.70:cdrwa

To validate that things are good, issue getAcl:

To review the Permission set, use getAcl:

Syntax:

getAcl   <folder-name>
Sample:

getAcl  /corpSec7

Output:

ZooKeeper Client – setAcl – Scheme (World)

Anyone

For the following use case scenario:

• Folder -> /corporate
• Authentication Provider -> world
• User –> anyone (The only valid user is the “anyone” user)
• Permission -> crwda
Syntax:

setAcl <folder-name> world:anyone:<permission-set>

Sample:

setAcl /advert world:anyone:crdwa

Output:

ZooKeeper Client – setAcl – Digest Authentication

To grant access to the /corporate folder to a specific username and password (digest authentication), do the following:

For the following use case scenario:

• Folder -> /corporate
• Authentication Provider -> digest
• Permission -> crwda
Syntax:

setAcl <folder-name> digest:<username>:<password>:<permission-set>

Sample:

setAcl /corporate digest:dadeniji:waTER:crdwa

Output:

For the following use case scenario:

• Authentication Provider -> digest
• Permission -> crwd
Syntax:

setAcl <folder-name> digest:<username>:<password>:<permission-set>

Sample:

setAcl /advert digest:dadeniji:safetec:crdw

Output:

ZooKeeper Client – Stat

Get folder Stats:

Syntax:

stat <folder-name>

Sample:

stat  /hbase
stat  /zookeeper

Output:

Error Messages:

Error Message – NoAuthException


Exception in thread "main" org.apache.zookeeper.KeeperException$NoAuthException: KeeperErrorCode = NoAuth for /corpSec7
at org.apache.zookeeper.KeeperException.create(KeeperException.java:113)
at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
at org.apache.zookeeper.ZooKeeper.setACL(ZooKeeper.java:1375)
at org.apache.zookeeper.ZooKeeperMain.processZKCmd(ZooKeeperMain.java:733)
at org.apache.zookeeper.ZooKeeperMain.processCmd(ZooKeeperMain.java:593)
at org.apache.zookeeper.ZooKeeperMain.executeLine(ZooKeeperMain.java:365)
at org.apache.zookeeper.ZooKeeperMain.run(ZooKeeperMain.java:323)
at org.apache.zookeeper.ZooKeeperMain.main(ZooKeeperMain.java:282)


Explanation:

• One always has to be careful when setting permissions
• And, it seems that once they are set, it is difficult to change them (one mitigation is sketched below)
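One mitigation for a folder locked down with a digest ACL is to authenticate the client session before touching it.  A sketch of the general pattern, assuming the ACL was created with a properly digested credential for dadeniji:waTER:

addauth digest dadeniji:waTER
getAcl /corporate

Note that setAcl with the digest scheme normally expects the base64-encoded SHA-1 digest of user:password, whereas addauth takes the plain-text form.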

Logging:

Logging – Log File – Location

ZooKeeper log files are kept in /var/log/zookeeper

Logging – Log File – Name

The naming convention for log file is:

zookeeper-cmf-zookeeper1-SERVER-<FQDN>.log
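For example, to follow the server log on the local node (a sketch; the "zookeeper1" portion comes from the Cloudera-managed role name and may differ in your deployment):

ls -la /var/log/zookeeper

tail -f /var/log/zookeeper/zookeeper-cmf-zookeeper1-SERVER-$(hostname -f).log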
