Favorite Languages

Here are some computer languages that I like.

I kind of got bored of scrolling down and reading all those email messages from the Financial Economists Network (FEN). Wouldn't it be nice to create a script that parses through all the email messages and spits out only the information relevant to a reference management software such as EndNote? It turns out that the Practical Extraction and Reporting Language (PERL) is a handy tool we cannot live without as far as scripting is concerned. Here are some resources I dug up.

HTMLITE provides some very nice and intuitive explanations of the basics of working with PERL. It is entirely example driven, so it should be a great starting point for anyone with no prior scripting experience. CPAN provides some very detailed documentation about PERL, especially in its frequently asked questions section. The major drawback of this site, though, is its unsuitability for beginners. For the documentation of many PERL functions, try PERLDOC. For a warm-hearted and easy-to-understand treatment of PERL, check out the free beginner's material at learn.perl.org. Highly recommended. You may also want to check out the PERL You Need to Know. After a while, you might have to struggle a little with the Unicode stuff. Maybe you will even muster enough courage to write a PERL module for CPAN. The free book by Sam Tregar titled "Writing Perl Modules for CPAN" does a great job explaining the little details. Whenever in doubt, you might want to tap into the wise Perl Monks for active community support.

Here is some brief information (by Jatin Sethi and Levon Lloyd) on how to automatically download and parse SEC filings from EDGAR. The topic is quite interesting, but the code provided was still a work in progress. Kambil provides an NSF report on the EDGAR system, and another document (joint with Ginsburg) on how to use PERL to access EDGAR. The most useful bit of public information regarding the EDGAR system, in my opinion, is the source code used to create EDGAR in its infancy. It is provided by the Internet Multicasting Service, one of the originators of EDGAR.

Okay, here is my virgin PERL code. Obviously the efficiency can be improved further, but it does parse 30MB worth of raw data within half a minute. The basic steps involved are:

  1. Select all the email messages from FEN in Outlook 2003 and use your left mouse button to drag them into the TASK panel on the left; a new task note containing all the FEN messages will be generated automatically.
  2. Copy the entire content of the task note and paste it into a text file, say "testfile.txt".
  3. Put the text file in the same folder as the PERL script.
  4. Run the script; the final output "testout6.txt" contains all the fields related to each paper, such as title, author, source, date, abstract, volume, and issue.
  5. Download a modified version of the JSTOR filter for EndNote from here, and you can easily import all the cleaned-up references into EndNote.

Still not convinced of the power of PERL? Here is an example. In a recent research project, the combination of a PERL script (that downloads and parses the data) and a shell script (that invokes multiple PERL processes at once) allowed me to get the data on more than 55,000 auctions from eBay within seven minutes. If you were to download those data manually from eBay, it would likely take you more than seven minutes to clean up even one auction in Excel. I admit that it is probably not fair to boast about the speed without giving details on the hardware (it was run on the WRDS server), but you get the idea of how fast it can be.

Here is another case demonstrating the usefulness of PERL scripts. Sometimes I have to run a bunch of SAS jobs on the WRDS server in a chained-up fashion because WRDS allows us to run at most three SAS jobs at a time. Each of the jobs takes many hours (unknown in advance) to complete and produces large output files. It is essential to retrieve those large files to my local workstation as soon as possible, because WRDS automatically deletes files that are two days old and because it might run out of work space even in the temporary folder. So, I wrote a PERL script to constantly check the status of the jobs and send me an email upon the completion of each job. Use a line like the following inside a loop to do the checking: $pscheck = `ps -ef | grep "job$batchid.sas" | grep -v grep | wc -l`; and use the following line for the email notification: system("echo done | mailx -s job$batchid.sas someone\@somewhere.com") if ($pscheck == 0); Note that the job name is something like job125.sas in my case. In case you don't already know it, you can download ActivePerl for free. Try it out. It is great!
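
Putting the pieces together, here is a minimal sketch of the watcher loop, assuming a Unix server with mailx available; the job number, email address and polling interval are placeholders:

    #!/usr/bin/perl
    use strict;
    use warnings;

    my $batchid = 125;    # hypothetical job number, i.e., job125.sas
    while (1) {
        # count the live processes running job125.sas
        my $pscheck = `ps -ef | grep "job$batchid.sas" | grep -v grep | wc -l`;
        if ($pscheck == 0) {
            # the job is gone from the process table; notify me and stop
            system("echo done | mailx -s job$batchid.sas someone\@somewhere.com");
            last;
        }
        sleep 300;        # check again in five minutes
    }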

Here is the most advanced scenario I have encountered so far. Suppose that you suddenly became the only heir of a multi-millionaire and would be able to claim more than $20 million in total. The late millionaire wrote the will in such a way, however, that you have to withdraw the money in a sequential fashion. That is, you are not allowed to withdraw more than $150 per day, and each time you withdraw the money you have to fill out a paper withdrawal form after going through the bank door guarded by a dog with nine heads. Sounds unreasonable? Well, what do you say to a large database that is open to the public with a web interface but has a size limitation (150KB) on how much data you can get from it per retrieval? If you think of hiring a robot to get the money faster, then you are on the right track, and there are many tools in PERL that can aid you. But not so fast: do you remember the nine-headed dog? Some public databases use JavaScript and other fancy technologies to hide the links, which gives a hard time to the standard PERL modules such as LWP, Mechanize, etc. The problem is that PERL does not have a JavaScript engine to adequately evaluate those scripts. Who can help us out of this mess? Meet Samie, or Simple Automation Module for Internet Explorer. This powerful tool makes it possible to write PERL scripts that interact with Internet Explorer and leave it to Internet Explorer to interpret the JavaScript. The downside is that you have to swallow your pride and stick with M$IE. Oh, you will hear the repeated clicking sound too, if you turn on the audio. Try it out and it could be your big day. If you run into something Samie has not yet covered, you may want to check out the M$ references for Internet Explorer and the DHTML DOM sites.
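
If you want a taste of how such browser driving looks before installing Samie, here is a minimal sketch using Win32::OLE directly (the COM layer that Samie builds upon); the URL is a placeholder and this assumes a Windows machine with Internet Explorer installed:

    use strict;
    use warnings;
    use Win32::OLE;

    # launch a visible Internet Explorer instance via COM
    my $ie = Win32::OLE->new('InternetExplorer.Application')
        or die "Cannot start Internet Explorer\n";
    $ie->{Visible} = 1;
    $ie->Navigate('http://www.example.com/');    # placeholder URL

    # wait until IE (and its JavaScript) has finished rendering the page
    sleep 1 while $ie->{Busy} or $ie->{ReadyState} != 4;

    # the fully rendered document is now available for parsing
    my $html = $ie->{Document}{body}{innerHTML};
    print length($html), " bytes of rendered HTML\n";
    $ie->Quit();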

You may want to update the CPAN package by typing perl -MCPAN -e "install CPAN". Alternatively, you can type perl -MCPAN -e shell and then install MD5. Type install Bundle::CPAN to get the latest CPAN package and type reload cpan to refresh it. You may find the need for HTTPS support in the LWP module. Install the Crypt::SSLeay module by typing ppm install http://theoryx5.uwinnipeg.ca/ppms/Crypt-SSLeay.ppd for ActivePerl 5.8.8 or ppm install http://cpan.uwinnipeg.ca/PPMPackages/10xx/Crypt-SSLeay.ppd for ActivePerl 5.10.0. Also check out the great article by Bret H Swedeen, highlighting the necessary steps for secure website access with PERL. It uses WWW::Mechanize for this purpose.

As it turns out, the more I use WWW::Mechanize, the better I like it. It allows for an easy yet powerful rendition of web automation, making web redirects, frames, form filling and other tedious tasks extremely easy. Cookie handling is done automatically, and you can use basic authentication by declaring use PerlIO::encoding; and use MIME::Base64;, and then issuing lines such as my @basic_auth = (Authorization => "Basic " . MIME::Base64::encode_base64($username . ':' . $password, '')); (note the space after "Basic" and the empty string argument that suppresses the trailing newline). After that you can add the header $agent->add_header(@basic_auth); so that all the GET and POST tasks automatically have the authentication done. You can easily save the intermediate form data for close perusal, or you can use TamperData (a useful Firefox extension) to decipher the communications between your browser and the remote server, so as to better fill out the forms. There are lots of ways to fill out the forms using the commands provided by Mechanize, and you are also allowed to manipulate the web forms manually so that Mechanize uses the updated form when posting to the remote server. For example, you may have a form with a huge drop-down menu from which you can select various values. Instead of going through them one by one and setting the corresponding values, you can easily select all of them: save the form content with my $contentnow = $agent->response->content(); do a search and replace to reset the selection values with $contentnow =~ s{(<OPTION VALUE="[a-z0-9]{10}")>}{$1 SELECTED>}gi; and finally issue the command $agent->update_html($contentnow); to update the form that Mechanize uses internally. There is about only one thing Mechanize is not designed to do: handle JavaScript. If you are interested in checking out this powerful tool, make sure to upgrade to the latest version of Mechanize first. You can type perl -MCPAN -e shell to launch the interactive CPAN, and then type install WWW::Mechanize to get it done. You may have to install other related modules as well.
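
To make the pieces above concrete, here is a minimal sketch of a Mechanize session with basic authentication; the URL, credentials and form field name are all placeholders:

    use strict;
    use warnings;
    use WWW::Mechanize;
    use MIME::Base64;

    my ($username, $password) = ('me', 'secret');    # placeholders
    my $agent = WWW::Mechanize->new(autocheck => 1);

    # attach the basic-authentication header to every subsequent request
    my @basic_auth = (Authorization =>
        'Basic ' . MIME::Base64::encode_base64("$username:$password", ''));
    $agent->add_header(@basic_auth);

    # fetch a page, fill out its first form, and submit it
    $agent->get('http://www.example.com/search');    # placeholder URL
    $agent->submit_form(
        form_number => 1,
        fields      => { query => 'market microstructure' },  # hypothetical field
    );
    print $agent->response->content();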

Sometimes you need a little sorting to work in your favor. Read the simple but effective illustration by Perfect Solutions. If it does not yet solve your problem, then read the various listings related to sorting on Perl Monks, especially this one about sorting on multiple columns. If you have the time, you should also check out the article "A Fresh Look at Efficient Perl Sorting" by Guttman and Rosler. It is not easy to read, but it provides an in-depth explanation of various ways of conducting efficient sorts inside PERL.
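
As a taste of what those listings cover, here is a minimal multi-column sort sketch using the Schwartzian transform, assuming comma-delimited records with a name in the first column and a year in the second (the data are made up):

    use strict;
    use warnings;

    my @lines = ("smith,1997,12", "jones,1993,45", "adams,1993,7");  # hypothetical records

    # sort by year (numeric), then by name (alphabetic); the map/sort/map
    # sandwich ensures each line is split only once, not once per comparison
    my @sorted = map  { $_->[0] }
                 sort { $a->[1] <=> $b->[1] or $a->[2] cmp $b->[2] }
                 map  { my @f = split /,/; [ $_, $f[1], $f[0] ] }
                 @lines;

    print "$_\n" for @sorted;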

One of the research fields that I am interested in is market microstructure, and the empirical work in this particular field makes a good command of SAS a necessity. Without some knowledge of base SAS and SAS/SQL, it proves almost impossible to bite into the New York Stock Exchange Trades and Quotes (NYSE TAQ) database, which contains voluminous information regarding each trade and quote that has occurred on the US stock markets since 1993. For instance, the consolidated quotes for all stocks in January of 2007 take up more than 120GB in SAS format.

I picked up my SAS skills from doing RA work (yes, there is actually some synergy in doing the tedious RA work!) during my PhD days, while crawling with the aid of The Little SAS Book by Delwiche and Slaughter. Later on, I realized the importance of using macros in SAS and eventually grasped them well by reading the SAS Macro Reference from cover to cover. I learned the generic version of SQL from Sams Teach Yourself SQL in 21 Days. Obviously, it took me more than 21 days, even with the help of the SAS Procedures Reference on SQL by SAS Institute.

There are plentiful web resources regarding SAS programming, and here are my favorite picks. Whenever you feel the need to consult a reference manual, go to the SAS Online Documentation version 9. Note that sometimes it is easier to read the PDF version of the documentation. The Technical Support site at the SAS Institute is also a nice place to search for help. Do you want to learn SAS from watching a bunch of Flash videos? Check out the SAS tutorials at TAMU. Do you want to execute operating system commands from your SAS session? Here are the commands. Also check out this SUGI paper for useful macros that automate the SASTASK COMMAND. Do you want to figure out all the details about how to join tables in SQL? Read the page here. Do you feel the pain of dealing with the date and time formats in SAS? Here is the relevant portion of the reference manual. And here are the links to the ETS/MODEL procedure and the corresponding ODS table names. Sooner or later, you may encounter files with foreign characters, so a brief look at character encodings proves helpful. Alternatively, you should check out the User's Guide on National Language Support (NLS).
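
Here is a minimal sketch of two of these tricks, assuming an interactive SAS session on a Unix host (the directory listing command is just an illustration):

    /* execute an operating system command from the SAS session */
    x 'ls -l';

    /* juggle SAS date and time values with literals and formats */
    data _null_;
      d = '02jan2007'd;            /* date literal */
      t = '09:30:00't;             /* time literal */
      put d= date9. t= time8.;     /* prints d=02JAN2007 t=9:30:00 */
    run;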

Do you want to figure out the ins and outs of data conversion using the SAS PC Files Server? You should refer to the SAS/ACCESS 9.3 Interface to PC Files Reference or the easier-to-read PDF file, which should help you swim around Excel files, Stata files, etc.

Ever wonder where to get a copy of the sample code accompanying The Little SAS Book? Here is the repository of all programs associated with the Books By Users published by SAS Institute. I would call it a real treasure planet, as opposed to the Disney one. Are you wondering if there is a better way to handle a tedious and/or complex job? Check out the archive of advanced tutorials at the SAS Global Forum, formerly known as the SAS Users Group International or SUGI. How do you sign on to the Wharton Research Data Services (WRDS) using PC SAS? Use a remote submission, which lets your SAS program run on the Unix server at WRDS while you see all the output right in front of you on your PC. It's ideal for debugging portions of a complex program. Do you think you have a problem that is hard to swallow? Search the SAS group on Google and you may be surprised to find that someone already has an answer to your ugly problem. If you are already a SAS user but for some reason you want to use a STATA command, here is your shortcut: Stata for the Struggling SAS Mind.
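
Here is a minimal sketch of such a remote submission, assuming you have a WRDS account (the query itself is just an illustration):

    %let wrds = wrds.wharton.upenn.edu 4016;    /* WRDS host and port */
    options comamid=tcp remote=wrds;
    signon username=_prompt_;                   /* prompts for your WRDS login */

    rsubmit;
      /* everything in this block runs on the WRDS Unix server */
      proc sql;
        select count(*) as nobs from crsp.dsf;  /* illustrative query */
      quit;
    endrsubmit;

    signoff;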

Sometimes we have to conserve storage space when dealing with humongous databases such as NYSE TAQ. SAS allows us to read in compressed raw files on the fly, and this can be done under both Unix and Windows. The excellent Academic Technology Services (ATS) at UCLA provides great tips and examples on how this can be achieved. Essentially, we take advantage of the Unix/Windows versions of Info-ZIP to uncompress the archives into pipes. Just in case the default ftp site for Info-ZIP is down, you can use this alternative site. You should look for the file named "unz552xN.exe", where 552 refers to the version number, so a higher version number should not hurt. You can also use the local copy stored on my server.
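
Here is a minimal sketch of the pipe trick under Unix, assuming Info-ZIP's unzip is on the path; the file name and record layout are placeholders:

    /* read a zipped raw file on the fly through a pipe */
    filename taqraw pipe 'unzip -p /data/taq/CT200701.zip';

    data trades;
      infile taqraw dlm=',' dsd missover;
      input symbol :$8. date :yymmdd8. time :time8. price size;
      format date date9. time time8.;
    run;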

Here is a tip on how to work with the NYSE TAQ database hosted on WRDS. Given that the consolidated quotes are getting bigger by the day (120GB for January 2007 alone), it is fairly hard to loop through the monthly files in one gulp, because there is simply not enough temporary storage space available on WRDS for this purpose. Therefore, it becomes paramount to implement a loop that will shorten the access time. Here is an idea with proven effectiveness. Read through the trades file one month at a time, and create a summary file documenting each ticker symbol and trading day combination together with the total number of trades on that day. Use this summary file to divide all the symbol/date combinations into 50 segments (or fewer for early years), each containing roughly the same number of trades. Document the segment number, the ticker symbol and the date range in a separate lookup file with its index generated. Loop through both the consolidated trades and quotes files for the same month according to each combination of ticker symbol and date range, and append the matched/cleaned outputs according to the segment number. This way, you will get 50 cleaned TAQ files for each month that can be quickly stitched back together. It takes some time to generate the key lookup file and its index (about 13 minutes for January 2007 on WRDS), but this approach allows you to pick one symbol/date block at a time, without having to filter through the entire database in each round. In my experience, it quickly got rid of both problems: insufficient temporary storage space and the slow pace of cleaning one month at a time.
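
Here is a sketch of the bookkeeping step in that plan, with hypothetical library and dataset names and January 2007 used for concreteness:

    /* step 1: tally the number of trades per symbol/date combination */
    proc sql;
      create table counts as
        select symbol, date, count(*) as ntrades
        from taq.ct200701
        group by symbol, date
        order by symbol, date;
      select sum(ntrades) into :total from counts;
    quit;

    /* step 2: assign symbol/date blocks to 50 roughly equal segments */
    data segments;
      set counts;
      retain seg 1;
      cum + ntrades;                    /* running trade count       */
      if cum > &total / 50 and seg < 50 then do;
        seg + 1;                        /* start a new segment       */
        cum = ntrades;                  /* restart the running count */
      end;
    run;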

The SAS Consulting Interest Group provides 400+ tips and tricks for programming in SAS. Although PROC EXPORT is often useful for creating delimited files that can be imported into a different statistical software package, this powerful procedure is not available on all operating systems. For instance, it is not available in Linux. The Technical Support site at the SAS Institute provides a robust macro to convert SAS datasets into delimited files.

Sometimes it is useful to share a Unix folder with someone else, and you can use the access control list (ACL) to achieve this. Use the setfacl command to modify the ACL for a certain file and use the command getfacl to view it. Here is an example of giving total control (read, write and execute) over a certain folder: setfacl -rm user:iamluck:rwx /myfoldername/*.

Also check out these excellent research computing sites for links to helpful articles on SAS topics.

If you don't mind some ready-to-serve statistical packages, STATA is the top pick in this category.

Thinking about dealing with raw data using SAS and performing statistical analysis in STATA? You are not alone. That's why there is a good site named A SAS User's Guide to STATA. Okay, we all need to start somewhere. How about some basic data management in STATA? Very serious about programming in STATA? Why not join the listserver? Or take a peek at the resources for learning STATA compiled by the vendor. Do yourself a favor and check out these movies designed to help you learn the basic concepts in STATA. I also find these notes on Applied Econometrics very useful.

Personally, I believe the best way of learning any programming language is to read the user's guide from cover to cover and then work through lots of examples on your own. Here is a copy of my notes from reading the STATA User's Guide Version 8. It has five pages in total, abstracted from a guide some 300 pages strong, so it may not fit you well. But I suggest you write a set of reading notes of your own, if you want to learn it fast.

So I suppose that you want to copy those nice regression results into a table, if only you can bear the pain of copying and pasting individual cells, putting up the asterisks and adding all those parentheses. Well, there is one less painful way to achieve that. First of all, it is worthwhile to take a look at a nifty text editor called UltraEdit, with which you can select individual columns (not just rows) of results to copy and paste into Excel. Second, you need to know how to use the Text-to-Column (T2C) conversion feature in Excel. In fact, I always create a little button on the toolbar so that I can press it and make T2C happen. Third, select all the numbers and format them nicely so that they have the right number of decimal places. Fourth, copy the rows/columns that you designate to hold standard errors or p-values, and paste them into your favorite text editor. They should all carry the right number of decimal places in the text editor. Now go back to Excel and format these cells as text. You will see numbers with an undesired format showing up, left aligned. Before you yell "That's not what I wanted!", copy those neatly formatted numbers from your text editor back to the right places in Excel. Well, they now have the right number of decimal places, but "still not what I wanted." While these numbers are still highlighted, format them yet again. This time, choose the Custom format and type (@) in the box that shows up. Wow! They now all carry parentheses!! Make them right-aligned and you are ready to use the Format Painter to apply the format to similarly designated rows/columns. Now you are ready to copy and paste the entire table from Excel to Word, and you don't have to punch in all those parentheses one by one. If you are patient enough to follow these directions and finish a table once, you will never punch in those parentheses manually again.

Here is an even better way of dealing with the regression results. Use the OUTREG command in STATA, with the following set of options: nolabel 3aster noparen adjr2 bdec(3) tdec(3) rdec(3) comma, to generate a CSV file. Start a new Excel worksheet, select the entire active worksheet, and format every cell as Text. Copy the regression results from the CSV file, pick any cell in the active worksheet, choose Paste Special and select Text format. Now you have all the results nice and tidy, except that you perhaps want those stars in a separate column. If this is the case, download a macro I wrote for UltraEdit from here. Copy the results from the Excel worksheet to UltraEdit, choose Macro >> Load ... to load the repstar macro, and press CTRL+SHIFT+R to make the stars appear in separate columns. Copy the cleaned data from UltraEdit back into Excel, and you will find the magic has worked yet again.
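
For the record, here is a minimal sketch of the OUTREG step, assuming the user-written OUTREG package is installed (ssc install outreg); the dataset and the regression are just illustrations:

    sysuse auto, clear
    regress price mpg weight foreign
    outreg using myresults, nolabel 3aster noparen adjr2 ///
        bdec(3) tdec(3) rdec(3) comma replace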

What is the best way of all? In my biased view, nothing works better than writing a short piece of PERL script to pick the right pieces out of the output log file. Knowing a bit about the powerful regular expressions in PERL can simplify your life substantially when you are dealing with regressions over multiple groups. The more complex the output log looks, the greater the advantage of using a PERL script. Here is a sample PERL script I wrote for this purpose. If you don't know how to modify the script, then it's worthless to you. Otherwise, it could be very valuable.
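
To show the flavor of it, here is a minimal sketch that pulls variable names, coefficients and standard errors out of a log file, assuming the coefficient table lines look like "varname | coef stderr ..." (the file name and layout are hypothetical):

    use strict;
    use warnings;

    open my $log, '<', 'regress.log' or die "Cannot open log: $!";
    while (my $line = <$log>) {
        # capture the variable name, coefficient, and standard error
        if ($line =~ /^\s*(\w+)\s*\|\s*(-?[\d.]+)\s+(-?[\d.]+)/) {
            print join("\t", $1, $2, $3), "\n";
        }
    }
    close $log;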

If you hate the default text editor in STATA, you should be happy to know that you can integrate external editors such as UltraEdit with STATA. Check out the page by Friedrich Huebler.

You may someday find the need to implement bootstrapping procedures, and the existing bootstrap command in STATA may not give you enough mileage. Then you will definitely have to learn the programming details of STATA so that you can write customized functions. Moreover, the uniform() random numbers may not be exactly what you want either. In this case, I suggest you read Chapter 7 of a book titled "Elements of Statistical Computing" (volume 2) by Ronald Thisted. A draft copy of this chapter can be downloaded here; it gives some historical notes on random number generation as well as a list of algorithms that help you get non-uniform random numbers from uniform ones.
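
As a starting point, here is a minimal sketch of a hand-rolled bootstrap in modern STATA syntax; the statistic and the dataset are purely illustrative:

    * define an r-class program returning the statistic of interest
    capture program drop mymean
    program define mymean, rclass
        syntax varlist(max=1)
        summarize `varlist', meanonly
        return scalar mu = r(mean)
    end

    * bootstrap that statistic over 500 resamples
    sysuse auto, clear
    bootstrap mu = r(mu), reps(500) seed(12345): mymean price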

If you want to become more knowledgeable about STATA graphics, I recommend you check out the GeoCenter site dedicated to data analysis and visualization using STATA. I am particularly fond of the STATA cheat sheets its authors provide.

  1. Basic STATA Commands
  2. Data Transformation
  3. Types of Plots
  4. Plot Customization
  5. Statistical Analysis
  6. Intro to STATA Programming

A whole bunch of TeX users, ranging from novices like me to gurus like Knuth, take pride in the complexity of TeX input and the beauty of TeX output. Some people go as far as saying "my dog doesn't know how to do TeX" in defense of their preference for TeX over more casual word processors. Yes, there is a steep learning curve associated with TeX. No, you don't have to be that arrogant. This short passage tries to give you a less painful way of using TeX, especially suitable for new users like me.

First things first: you need to obtain some software before you can start. There are many variations of TeX editors/compilers available. Perhaps the most frequently recommended set of such tools is WinEdt as an editor and MiKTeX as a compiler. You can obtain MiKTeX free of charge at www.miktex.org and download a trial copy of WinEdt at www.winedt.com. Both of them carry pretty detailed instructions on how to make an appropriate installation. If you want a free TeX editor, try a search at www.google.com and you will be surprised by the results.

If you don't like the philosophy of "throwing the baby out with the bathwater," you may want to take a quick glance at some fairly good introductions to LaTeX, one of the most popular variations of TeX. I strongly recommend reading "The Not So Short Introduction to LaTeX 2e," also known as "LaTeX 2e in 95 Minutes," by Tobias Oetiker, Hubert Partl, Irene Hyna and Elisabeth Schlegl. Depending on your time availability and seriousness about graphics, you may or may not want to read the rather thorough book "Using Imported Graphics in LaTeX 2e" by Keith Reckdahl. For other useful TeX resources on the internet, search www.ctan.org or www.google.com.

Here comes the not-so-conventional approach that I advocate in using TeX. Yeah, I like the beautiful TeX output. Yeah, I hate the messy TeX input. Yeah, I can have the cake and eat it too, or, I can retain the beautiful and purge the ugly at the same time. You can too. How?

Most people know how to use M$ Word for basic word processing. If you ever come to the point of writing some equations in your document, however, you will inevitably feel the pain of using the default Equation Editor in M$ Word. My suggestion is to dump it immediately and go for the godfather of equation editors, something called MathType, a macro package native to M$ Word. In fact, the company that designed MathType licenses Equation Editor to M$ for bundling with Word and PowerPoint in order to fulfill some basic needs for equation editing. The same company also licenses different versions of the equation editor for other popular word processors such as WordPerfect. MathType is the commercial and much more powerful version of the equation editor. Not yet convinced? Download a trial copy of MathType and I bet you will be hooked soon, just like I was. It is really easy and convenient to use MathType in conjunction with M$ Word, not to mention the TeX-like look of the math font (called Euclid) it provides. Even if you eventually decide not to purchase it, it's good to install the free Euclid font on your computer for two reasons: many people are using MathType, so you can communicate with them, and the Euclid font really looks good.

I like the combination of M$ Word and MathType because it's really easy to use both products simultaneously. More importantly, both of them deliver WYSIWYG (what you see is what you get), not the arcane slashes and weird text that you are stuck with when using TeX. Simply put, it's much more intuitive. The thing I like least about TeX is its poor handling of graphics and tables. By contrast, you can become an expert with the graphics drawing tools in M$ Word almost instantaneously. With the good-looking Euclid font from MathType, you can generate very serious figures and tables in a very simple way, not to mention the interoperability of Word with Excel, etc. Many of you may feel satisfied already with the combination of Word and MathType, but not me, the one who "wants to eat the cake and have it too."

To reiterate, I like the beautiful TeX output and hate the messy TeX input. Here comes the life saver, Word2TeX, a very handy tool you can use to save a Word document as a TeX one, automatically converting all the equations and other formatting information. Just point and click, and you are done. Well, almost. You can download a trial copy from www.word2tex.com and witness its ease of use for yourself.

The small caveat is that Word2TeX doesn't handle graphics well enough, although it attempts to convert all the figures. Here is my simple advice on how to get high quality graphics processed easily. If you haven't done so already, pull all your sophisticated graphs and tables out of the main text and put them at the end of the document. Print from Word to Adobe Acrobat (I mean the full Acrobat, not its baby version, Adobe Reader), which is widely available at least in academia, to generate a high resolution PDF file. You want to make sure that the resolution is no less than 600 dpi. Open this PDF file in Adobe Acrobat and extract (choose Documents >> Extract) the pages containing those graphs and tables, press CTRL+T (yes, I am a PC user, not a Mac user) to access the crop tool, select the appropriate range covering each graph or table, and discard the rest. Save each individual graph/table into a new PDF or EPS file from Acrobat. (Yes, you will get very small files with very high resolution in doing so.) You can then easily use the "\includegraphics" command in TeX to embed those graphs and tables.
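
Here is a minimal sketch of the embedding step, assuming the extracted file is named figure1.pdf and sits next to the .tex file:

    \documentclass{article}
    \usepackage{graphicx}
    \begin{document}
    \begin{figure}[htbp]
      \centering
      \includegraphics[width=0.8\textwidth]{figure1}  % extracted high-resolution graph
      \caption{An illustrative figure caption.}
    \end{figure}
    \end{document}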

My suggestion for the whole process? Work with Word/MathType on the early drafts of the document you are preparing, at which stage you should focus on the content, without being forced to pay attention to the format. Once you are satisfied with the content, use Word2TeX to convert it into a TeX file, taking care of the graphics and tables in the way mentioned above. Work with WinEdt/MiKTeX to perform the polishing touches. When finally facing the professional-looking printout, you can say "I am done, and in such an easy way." Next steps? If you want to make your document really fancy, search the TeX group in the GROUPS section of www.google.com for a wide variety of additional resources. The LaTeX community web forum is a good place to check out as well.

For those of you who miss the Track Changes feature in M$ Word, which can be useful when collaborating with someone else, all is not lost when you switch to the TeX camp. You can run a set of PERL scripts to compare two TeX files and generate a new TeX document with the differences highlighted in color. For more details, see the latexdiff package. Note that you need to install the free PERL on your workstation first in order to take advantage of this neat tool.
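
In its simplest form, the comparison takes two commands (the file names are placeholders):

    latexdiff draft-old.tex draft-new.tex > draft-changes.tex
    pdflatex draft-changes.tex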

On the front of commercial TeX packages, I recommend Scientific WorkPlace by MacKichan. Here are some notable benefits of using this particular product. First, you have quite a few good-looking templates to choose from. Second, you can actually save a lot of time when dealing with graphics, because it supports various graphic formats and all you need to do is copy and paste. If you want to modify the size and frame information for a graphic, just right-click on the target graph and choose Properties.

One headache you might encounter with Scientific WorkPlace, though, is that when you decide to make a separate title page, the footnotes on the title page actually carry numerical marks rather than symbols. You won't have this problem if you do not make a separate title page. Here is what to do to make sure that you have numerical footnote marks in the main text while retaining symbols as footnote marks on the separate title page. In the preamble (i.e., before the line \begin{document}) of the document, add the following line: \renewcommand{\thefootnote}{\fnsymbol{footnote}}. Add a second line \renewcommand{\thefootnote}{\arabic{footnote}} after the title and just before the main text. If you want to use roman numerals as footnote marks in the main text, then replace the second line above with \renewcommand{\thefootnote}{\roman{footnote}}. If you want to use alphabetic numbering as footnote marks in the main text, then replace the second line above with \renewcommand{\thefootnote}{\alph{footnote}}.
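
Here is a compact sketch of that arrangement; the \setcounter reset is my own addition so that the first footnote in the body starts at 1:

    \documentclass{article}
    \renewcommand{\thefootnote}{\fnsymbol{footnote}}  % preamble: symbols on the title page
    \title{A Paper}
    \author{Someone\thanks{A title-page footnote with a symbol mark.}}
    \begin{document}
    \maketitle
    \setcounter{footnote}{0}                          % reset the counter after the title
    \renewcommand{\thefootnote}{\arabic{footnote}}    % numeric marks from here on
    Body text.\footnote{A numeric footnote.}
    \end{document}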

On a related note, you may find it necessary to create a double-blind title page for the journal submission of your research paper. Ideally, you want the title page to stand alone without any page number in the footer. You can enter an encapsulated TeX field on the target page and type \thispagestyle{empty} in the entry field. For more information related to this task, you can check out the knowledge base articles on how to suppress the page number, how to change the typeset page numbers, and how to change the typeset title page.

Typesetting tables in TeX can sometimes be a headache. Whenever possible, try to set the table format in M$ Word and then use Word2TeX to convert the table. It is much easier to work with the converted table in TeX format thereafter. It does not hurt to know a few TeX commands, though. For example, you need the \multicolumn command (syntax and example: \multicolumn{2}{c}{text}) to merge multiple columns. You can use the \raisebox command (syntax and example: \raisebox{-1.50ex}[0cm][0cm]{text}) to merge multiple rows in a given column. Use \cline{3-5} to draw a horizontal line below columns 3 through 5, use \hline to draw a line below one entire row, and use | to draw column separators. Also check out the very friendly and detailed paper titled Tables in LaTeX2e: Packages and Methods by Lapo Filippo Mori.
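
Here is a small tabular sketch combining those commands; the numbers are made up:

    \begin{tabular}{lcc}
      \hline
      \raisebox{-1.50ex}[0cm][0cm]{Variable} & \multicolumn{2}{c}{Sample period} \\
      \cline{2-3}
                    & 1993--1999 & 2000--2007 \\
      \hline
      Mean spread   & 0.25       & 0.12       \\
      \hline
    \end{tabular}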

You may want to import the floating table fragment (File >> Import Fragment) in Scientific WorkPlace first, and then copy and paste the tabular entries from the intermediate file. Just remember to change the formatting of the column entries. For example, you may want to center all eight columns in a table by issuing \begin{tabular}{ccp{1pt}ccp{1pt}cc}. Note that the specifier p{1pt} refers to an empty column that you can insert to increase the spacing between adjacent blocks of columns. You can also change the table placement by typing \begin{table}[H] \centering inside the encapsulated TeX field (the beginning portion of the table fragment, or TeX button B). Also remember to add the float package as well as the lscape package in case you have tables so wide that you have to set them in landscape mode. Insert \begin{landscape} and \end{landscape} around the segments of tables where you need to switch into landscape mode. You can even issue a \newpage statement to break long tables into multiple pages, but the table on each page should have its own fragment and tabular statement. Make sure that your PDF driver supports the lscape package, though. For instance, the Adobe Acrobat print driver supports this package while the TrueTeX PDF driver does not. If you want to edit the table entries from the Style Editor inside Scientific WorkPlace, then you'd better comment out \begin{landscape} and \end{landscape}. You can always remove the comment signs once you are ready to print the PDF file.
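
Here is a skeleton of the landscape arrangement described above (the table body is just a placeholder row):

    \usepackage{float}     % preamble: enables the [H] placement
    \usepackage{lscape}    % preamble: enables landscape pages

    \begin{landscape}
    \begin{table}[H] \centering
    \begin{tabular}{ccp{1pt}ccp{1pt}cc}
      % eight columns, with two thin empty columns as block separators
      1 & 2 & & 3 & 4 & & 5 & 6 \\
    \end{tabular}
    \end{table}
    \end{landscape}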

Let's say you put the tables at the end of the paper and want the tables to have different line spacing (say, single space) from the rest of the document (say, double space). How do you do that? Insert \singlespacing before the tables start. What if you want to use a different font size (say, 10 points) than the rest of the document (say, 11 points)? You can issue \fontsize{10}{12} \selectfont at the start of the segment of tables. You also need to issue this same statement just before each tabular statement. If you need more low-level control over the font size, consult the font guide page by the LaTeX3 Project Team.

The landscape has changed quite a bit since my earlier post, so it warrants a bit of an update. I now endorse the open-source package LyX because it offers a handy TeX editor that happens to be quite capable of tracking changes as well. The land of Microsoft Word has changed considerably as well. The MS Equation Editor 3 made it possible to enter TeX commands directly. Those of us who want to be power users should check out the Unicode Technical Note. If you want a free Mathematics Add-In for Word and OneNote, you can download it directly from Microsoft. If you want to add more mathematical fonts, then you can try a post by Random Walks.

I highly recommend the How C Programming Works article on the famous website HowStuffWorks. If you are serious about programming in C, you probably want to buy the classic The C Programming Language by Kernighan and Ritchie, also known as the K&R book, or Practical C Programming, 3rd Edition, by Oualline. Here is a noteworthy internet tutorial on programming in C: a set of C programming notes (at both the introductory and the intermediate level) by Steve Summit.

You may ask why a finance person would be interested in C programming. My desire comes from the flexibility and the generic nature of the C language. What's equally important to me is the increasingly seamless connection between C and a few important matrix programming languages, such as Matlab and Gauss. As you may know, there are many well-known numerical algorithms written in C that you can readily deploy in your Matlab programs. If you are even remotely interested in numerical methods, you probably should invest some time in learning C programming and some money in buying the classic Numerical Recipes in C. It is probably not a bad idea to check out the code CD-ROM associated with Numerical Recipes in C from your local library, wherever possible.
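
To illustrate the C-to-Matlab connection, here is a minimal sketch of a MEX gateway function that doubles its input vector; compile it with the mex command inside Matlab (the file name and the doubling operation are purely illustrative):

    /* mydouble.c: a minimal MATLAB MEX gateway written in C */
    #include "mex.h"

    void mexFunction(int nlhs, mxArray *plhs[], int nrhs, const mxArray *prhs[])
    {
        mwSize i, n;
        double *in, *out;

        if (nrhs != 1 || !mxIsDouble(prhs[0]))
            mexErrMsgTxt("Expecting one double input.");

        n = mxGetNumberOfElements(prhs[0]);
        plhs[0] = mxCreateDoubleMatrix(1, n, mxREAL);

        in  = mxGetPr(prhs[0]);
        out = mxGetPr(plhs[0]);
        for (i = 0; i < n; i++)
            out[i] = 2.0 * in[i];    /* double each element */
    }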


© Qin Lei. All Rights Reserved.