wfopen
Open a workfile. Reads in a previously saved workfile from disk, or reads the contents of a foreign data source into a new workfile.
The opened workfile becomes the default workfile; existing workfiles in memory remain on the desktop but become inactive.
Syntax
wfopen [path\]source_name
wfopen(options) source_description [table_description] [variables_description]
wfopen(options) source_description [table_description] [dataset_modifiers]
where path is an optional local path or URL.
There are three basic forms of the wfopen command:
the first form is used for EViews native formats (“EViews and MicroTSP”) and time series database formats (“Time Series Formats”).
the second form is used for raw data files, such as Excel, Lotus, ASCII text, and binary files (“Raw Data Formats”).
the third form is used with the remaining source formats, which we term dataset formats, since the data have already been arranged in named variables (“Datasets”).
(See “Options” for a description of the supported source formats and corresponding types.)
In all three cases, the workfile or external data source should be specified as the first argument following the command keyword and options.
In most cases, the external data source is a file, so the source_description will be the description of the file (including local path or URL information, if necessary). Alternatively, the external data source may be the output from a web server, in which case the URL should be provided. Similarly, when reading from an ODBC query, the ODBC DSN (data source name) should be used as the source_description.
If the source_description contains spaces, it must be enclosed in (double) quotes.
For raw and dataset formats, you may use table_description to provide additional information about the data to be read:
Where there is more than one table that could be formed from the specified external data source, a table_description may be provided to select the desired table. For example, when reading from an Excel file, an optional cell range may be provided to specify which data are to be read from the spreadsheet. When reading from an ODBC data source, a SQL query or table name must be used to specify the table of data to be read.
In raw data formats, the table_description allows you to provide additional information regarding names and descriptions for variables to be read, missing values codes, settings for automatic format, and data range subsetting.
When working with text or binary files, the table_description must be used to describe how to break up the file into columns and rows.
For raw and non-EViews dataset formats, you may use the dataset_modifiers specification to select the set of variables, maps (value labels), and observations to be read from the source data. The dataset_modifiers consists of the following keyword delimited lists:
[@keep keep_list] [@drop drop_list] [@keepmap keepmap_list] [@dropmap dropmap_list] [@selectif condition]
The @keep and @drop keywords, followed by a list of names and patterns, are used to specify variables to be retained or dropped. Similarly, the @keepmap and @dropmap keywords, followed by lists of name patterns, control the reading of value labels. The keyword @selectif, followed by an if condition (e.g., “if age>30 and gender=1”), may be used to select a subset of the observations in the original data. By default, all variables, maps (value labels), and observations in the source file will be read.
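For example, a command along the following lines (the file and variable names are purely illustrative) reads a hypothetical SPSS file, keeping only AGE and the variables beginning with INC, dropping the value label map AGEMAP, and retaining only observations in which AGE exceeds 30:
wfopen(type=spss) c:\data\survey.sav @keep inc* age @dropmap agemap @selectif if age>30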
Options
 
“type=arg” (or “t=arg”), optional type specification (see the table below). Note that ODBC support is provided only in the EViews Enterprise Edition.
“link”, link the object to the source data so that the values can be refreshed at a later time.
“wf=wf_name”, optional name for the new workfile.
“page=page_name”, optional name for the page in the new workfile.
“prompt”, force the dialog to appear from within a program.
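For example, a command along the following lines (the file name is illustrative) opens an Excel file into a new workfile named SALES containing a page named MONTHLY:
wfopen(wf=sales, page=monthly) "c:\data\sales.xls"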
For the most part, you should not need to specify a “type=” option as EViews will automatically determine the type from the filename.
The following table summarizes the various source formats and types, along with the corresponding “type=” keywords:
 
 
Source                    Type                     Option Keywords
Access                    dataset                  “access”
Aremos-TSD                time series database     “a”, “aremos”, “tsd”
Binary                    raw data                 “binary”
dBASE                     dataset                  “dbase”
Excel (through 2003)      raw data                 “excel”
Excel 2007 (xml)          raw data                 “excelxml”
EViews Workfile           native                   ---
Gauss Dataset             dataset                  “gauss”
GiveWin/PcGive            time series database     “g”, “give”
HTML                      raw data                 “html”
Lotus 1-2-3               raw data                 “lotus”
ODBC DSN File             dataset                  “dsn”
ODBC Query File           dataset                  “msquery”
ODBC Data Source          dataset                  “odbc”
MicroTSP Workfile         native                   “dos”, “microtsp”
MicroTSP Mac Workfile     native                   “mac”
RATS 4.x                  time series database     “r”, “rats”
RATS Portable / TROLL     time series database     “l”, “trl”
SAS Program               dataset                  “sasprog”
SAS Transport             dataset                  “sasxport”
SPSS                      dataset                  “spss”
SPSS Portable             dataset                  “spssport”
Stata                     dataset                  “stata”
Text / ASCII              raw data                 “text”
TSP Portable              time series database     “t”, “tsp”
EViews and MicroTSP
The syntax for EViews and MicroTSP files is:
wfopen [path\]workfile_name
where path is an optional local path or URL.
Examples
wfopen c:\data\macro
loads a previously saved EViews workfile “Macro.WF1” from the “Data” directory on the C: drive.
wfopen c:\tsp\nipa.wf
loads a MicroTSP workfile “Nipa.WF”. If you do not use the workfile type option, you should add the extension “.WF” to the workfile name when loading a DOS MicroTSP workfile. An alternative method specifies the type explicitly:
wfopen(type=dos) nipa
The command:
wfopen "<mydropboxdrive>\folder\nipa.wf1"
will open the file from the cloud location MYDROPBOXDRIVE.
Time Series Formats
The syntax for time series format files (Aremos-TSD, GiveWin/PcGive, RATS, RATS Portable/TROLL, TSP Portable) is:
wfopen(options) [path\]source_name
where path is an optional local path or URL.
If the source files contain data of multiple frequencies, the resulting workfile will be of the lowest frequency, and higher frequency data will be converted to this frequency. If you wish to obtain greater control over the workfile creation, import, or frequency conversion processes, we recommend that you open the file using dbopen and use the database tools to create your workfile.
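As a rough sketch of this approach (the database type, workfile range, and series names are purely illustrative), you might open the source as a database, create a workfile at the desired frequency, and then fetch the series:
dbopen(type=rats) macrodata.rat
wfcreate(wf=macro_q) q 1970 2010
fetch gdp cons
Here the RATS file is opened as the default database, a quarterly workfile is created over the desired range, and the series are fetched into it, with frequency conversion applied as needed.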
Aremos Example
wfopen dlcs.tsd
wfopen(type=aremos) dlcs.tsd
open the AREMOS-TSD file DLCS.
GiveWin/PcGive Example
wfopen "f:\project\pc give\data\macrodata.in7"
wfopen(type=give) "f:\project\pc give\data\macrodata"
open the PcGive file MACRODATA.
Rats Examples
wfopen macrodata.rat
wfopen macrodata.trl
read the native RATS 4.x file MACRODATA.RAT and the RATS Portable/TROLL file “Macrodata.TRL”.
TSP Portable Example
wfopen macrodata.tsp
reads the TSP portable file “Macrodata.TSP”.
Raw Data Formats
The command for reading raw data (Excel 97-2003, Excel 2007, HTML, ASCII text, Binary, Lotus 1-2-3) is
wfopen(options) source_description [table_description] [variables_description] [@keep keep_list] [@drop drop_list] [@keepmap keepmap_list] [@dropmap dropmap_list] [@selectif condition]
where the syntax of the table_description and variables_description differs slightly depending on the type of file.
Excel and Lotus Files
The syntax for reading Excel and Lotus files is:
wfopen(options) source_description [table_description] [variables_description]
The following table_description elements may be used when reading Excel and Lotus data:
“range = arg”, where arg is a range of cells to read from the Excel workbook, following the standard Excel format [worksheet!][topleft_cell[:bottomright_cell]].
If the worksheet name contains spaces, it should be placed in single quotes. If the worksheet name is omitted, the cell range is assumed to refer to the currently active sheet. If only a top left cell is provided, a bottom right cell will be chosen automatically to cover the range of non-empty cells adjacent to the specified top left cell. If only a sheet name is provided, the first set of non-empty cells in the top left corner of the chosen worksheet will be selected automatically. As an alternative to specifying an explicit range, a name that has been defined inside the Excel workbook to refer to a range or cell may be used to specify the cells to read (see the example following this list).
“byrow”, transpose the incoming data. This option allows you to read files where the series are contained in rows (one row per series) rather than columns.
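For instance, a command such as the following (the file, worksheet, and cell range are illustrative) reads a block of cells from a named worksheet:
wfopen(type=excel) "c:\data\quarterly.xls" range="'GDP data'!b2:e100"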
The optional variables_description may be formed using the elements:
“colhead=int”, number of table rows to be treated as column headers.
“namepos = [first|firstatt|last|lastatt|all|none|attonly|discard|custom]”, specifies which row(s) of the column headers should be used to form the column name, and how the remaining rows are used. The setting “first” (or “last”) uses the first (or last) column header row as the object name, with all other rows used as the object's description. Similarly, “firstatt” (or “lastatt”) uses the first (or last) row as the name field, but stores all others as custom attributes. The setting “all” concatenates all column header fields into the object's name. “none” concatenates all column header fields into the object's description. “attonly” saves all column header fields into the object's custom attributes. “discard” skips all header rows altogether, and “custom” allows you to specify explicitly how to treat each column header row using the “colheadnames=” argument. The default setting is “all” if no “colheadnames=” is specified, and “custom” otherwise.
"colheadnames = ("arg1", "arg2")", required when “namepos=custom”. Specifies the name & type of each column header row. “Name” will be mapped to the object name, “Description” to the object's description field, and the rest will be stored as custom object attributes. Any blank name will cause that column header row to be skipped.
“nonames”, the file does not contain a column header (same as “colhead=0”).
“names=("arg1","arg2",…)”, user-specified column names, where arg1, arg2, … are names of the first series, the second series, etc. When names are provided, these override any names that would otherwise be formed from the column headers.
“descriptions=("arg1","arg2",…)”, user specified descriptions of the series. If descriptions are provided, these override any descriptions that would otherwise be read from the data.
“types=("arg1","arg2",…)”, user specified data types of the series. If types are provided they will override the types automatically detected by EViews. You may use any of the following format keywords: “a” (character data), “f” (numeric data), “d” (dates), or “w” (EViews automatic detection). Note that the types appear without quotes: e.g., “types=(a,a,a)”.
“na="arg1"”, text used to represent observations that are missing from the file. The text should be enclosed in double quotes.
“scan=[int|all]”, number of rows of the table to scan during automatic format detection (“scan=all” scans the entire file). Note: If a “range=” argument is not specified, then EViews will only scan the first five rows of data to try to determine the data format for each column. Likewise, if the “na=” argument is not specified, EViews will also try to determine possible NA values by looking for repeated values in the same rows. If the first five rows are not enough to correctly determine the data format, use the “scan=” argument to instruct EViews to look at more rows. In addition, you may want to specify the “na=” value to override any NA value that EViews may determine dynamically on its own.
“firstobs=int”, first observation to be imported from the table of data (default is 1). This option may be used to start reading rows from partway through the table.
“lastobs=int”, last observation to be read from the table of data (default is last observation of the file). This option may be used to read only part of the file, which may be useful for testing.
Excel Examples
wfopen "c:\data files\data.xls"
loads the active sheet of DATA.XLS into a new workfile.
wfopen(page=mypage) "c:\data files\data.xls" range="GDP data" @drop X
reads the data contained in the “GDP data” sheet of “Data.XLS” into the MYPAGE page of a new workfile, dropping the data for the series X.
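Similarly, a hypothetical command along these lines (the file name and header layout are assumptions for illustration):
wfopen(type=excelxml) "c:\data\indicators.xlsx" colhead=2, namepos=first, na="n/a"
would read an Excel 2007 file with two header rows, using the first row as series names and the second row as descriptions, and treating the text “n/a” as a missing value code.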
 
To load the Excel file containing US Macro Quarterly data from Stock and Watson’s Introduction to Econometrics you may use the command:
wfopen http://wps.aw.com/wps/media/objects/3254/3332253/datasets2e/datasets/USMacro_Quarterly.xls
which will load the Excel file directly into EViews from the publisher’s website (as of 08/2009).
HTML Files
The syntax for reading HTML pages is:
wfopen(options) source_description [table_description] [variables_description]
The following table_description elements may be used when reading an HTML file or page:
“table = arg”, where arg specifies which table to read in an HTML file/page containing multiple tables.
When specifying arg, you should remember that tables are named automatically following the pattern “Table01”, “Table02”, “Table03”, etc. If no table name is specified, the largest table found in the file will be chosen by default. Note that the table numbering may include trivial tables that are part of the HTML content of the file, but would not normally be considered as data tables by a person viewing the page.
“skip = int”, where int is the number of rows to discard from the top of the HTML table.
“byrow”, transpose the incoming data. This option allows you to import files where the series are contained in rows (one row per series) rather than columns.
The optional variables_description may be formed using the elements:
“colhead=int”, number of table rows to be treated as column headers.
“namepos = [first|firstatt|last|lastatt|all|none|attonly|discard|custom]”, specifies which row(s) of the column headers should be used to form the column name, and how the remaining rows are used. The setting “first” (or “last”) uses the first (or last) column header row as the object name, with all other rows used as the object's description. Similarly, “firstatt” (or “lastatt”) uses the first (or last) row as the name field, but stores all others as custom attributes. The setting “all” concatenates all column header fields into the object's name. “none” concatenates all column header fields into the object's description. “attonly” saves all column header fields into the object's custom attributes. “discard” skips all header rows altogether, and “custom” allows you to specify explicitly how to treat each column header row using the “colheadnames=” argument. The default setting is “all” if no “colheadnames=” is specified, and “custom” otherwise.
"colheadnames = ("arg1", "arg2")", required when “namepos=custom”. Specifies the name & type of each column header row. “Name” will be mapped to the object name, “Description” to the object's description field, and the rest will be stored as custom object attributes. Any blank name will cause that column header row to be skipped.
“nonames”, the file does not contain a column header (same as “colhead=0”).
“names=("arg1","arg2",…)”, user-specified column names, where arg1, arg2, … are names of the first series, the second series, etc. When names are provided, these override any names that would otherwise be formed from the column headers.
“descriptions=("arg1","arg2",…)”, user specified descriptions of the series. If descriptions are provided, these override any descriptions that would otherwise be read from the data.
“types=("arg1","arg2",…)”, user specified data types of the series. If types are provided they will override the types automatically detected by EViews. You may use any of the following format keywords: “a” (character data), “f” (numeric data), “d” (dates), or “w” (EViews automatic detection). Note that the types appear without quotes: e.g., “types=(a,a,a)”.
“na="arg1"”, text used to represent observations that are missing from the file. The text should be enclosed in double quotes.
“scan=[int|all]”, number of rows of the table to scan during automatic format detection (“scan=all” scans the entire file). Note: If a “range=” argument is not specified, then EViews will only scan the first five rows of data to try to determine the data format for each column. Likewise, if the “na=” argument is not specified, EViews will also try to determine possible NA values by looking for repeated values in the same rows. If the first five rows are not enough to correctly determine the data format, use the “scan=” argument to instruct EViews to look at more rows. In addition, you may want to specify the “na=” value to override any NA value that EViews may determine dynamically on its own.
“firstobs=int”, first observation to be imported from the table of data (default is 1). This option may be used to start reading rows from partway through the table.
“lastobs = int”, last observation to be read from the table of data (default is last observation of the file). This option may be used to read only part of the file, which may be useful for testing.
HTML Examples
wfopen "c:\data.html"
loads into a new workfile the data in the HTML file “Data.HTML” located on the C:\ drive.
wfopen(type=html) "http://www.tradingroom.com.au/apps/mkt/forex.ac" colhead=3, namepos=first
loads into a new workfile the data at the given URL on the web site “http://www.tradingroom.com.au”. The column header is set to three rows, with the first row used as names for the columns and the remaining two rows used to form the descriptions.
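To read a specific table from a page containing several tables, a command along these lines might be used (the file name and table number are illustrative):
wfopen(type=html) "c:\reports\summary.htm" table=Table02, skip=1, colhead=1, namepos=first
which selects the second table found in the file, discards its first row, and uses the single header row to name the series.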
Text and Binary Files
The syntax for reading text or binary files is:
wfopen(options) source_description [table_description] [variables_description]
If a table_description is not provided, EViews will attempt to read the file as a free-format text file. The following table_description elements may be used when reading a text or binary file:
“ftype = [ascii|binary]” specifies whether numbers and dates in the file are stored in human readable text (ASCII) or machine readable (binary) form.
“rectype = [crlf|fixed|streamed]” describes the record structure of the file:
“crlf”, each row in the output table is formed using a fixed number of lines from the file (where lines are separated by carriage return/line feed sequences). This is the default setting.
“fixed”, each row in the output table is formed using a fixed number of characters from the file (specified in “reclen= arg”). This setting is typically used for files that contain no line breaks.
“streamed”, each row in the output table is formed by reading a fixed number of fields, skipping across lines if necessary. This option is typically used for files that contain line breaks, but where the line breaks are not relevant to how rows from the data should be formed.
“reclines =int”, number of lines to use in forming each row when “rectype=crlf” (default is 1).
“reclen=int”, number of bytes to use in forming each row when “rectype=fixed”.
“recfields=int”, number of fields to use in forming each row when “rectype=streamed”.
“skip=int”, number of lines (if rectype is “crlf”) or bytes (if rectype is not “crlf”) to discard from the top of the file.
“comment=string”, where string is a double-quoted string, specifies one or more characters to treat as a comment indicator. When a comment indicator is found, everything on the line to the right of where the comment indicator starts is ignored.
“emptylines=[keep|drop]”, specifies whether empty lines should be ignored (“drop”), or treated as valid lines (“keep”) containing missing values. The default is to ignore empty lines.
“tabwidth=int”, specifies the number of characters between tab stops when tabs are being replaced by spaces (default=8). Note that tabs are automatically replaced by spaces whenever they are not being treated as a field delimiter.
“fieldtype=[delim|fixed|streamed|undivided]”, specifies the structure of fields within a record:
“delim”, fields are separated by one or more delimiter characters.
“fixed”, each field is a fixed number of characters.
“streamed”, fields are read from left to right, with each field starting immediately after the previous field ends.
“undivided”, the entire record is read as a single series.
“quotes=[single|double|both|none]”, specifies the character used for quoting fields, where “single” is the apostrophe, “double” is the double quote character, and “both” means that either single or double quotes are allowed (default is “both”). Characters contained within quotes are never treated as delimiters.
“singlequote”, same as “quotes=single”.
“delim=[comma|tab|space|dblspace|white|dblwhite]”, specifies the character(s) to treat as a delimiter. “White” means that either a tab or a space is a valid delimiter. You may also use the abbreviation “d=” in place of “delim=”.
“custom="arg1"”, specifies custom delimiter characters in the double quoted string. Use the character “t” for tab, “s” for space and “a” for any character.
“mult=[on|off]”, to treat multiple delimiters as one. Default value is “on” if “delim” is “space”, “dblspace”, “white”, or “dblwhite”, and “off” otherwise.
“endian = [big|little]”, selects the endianness of numeric fields contained in binary files.
“string = [nullterm|nullpad|spacepad]”, specifies how strings are stored in binary files. If “nullterm”, strings shorter than the field width are terminated with a single zero character. If “nullpad”, strings shorter than the field width are followed by extra zero characters up to the field width. If “spacepad”, strings shorter than the field width are followed by extra space characters up to the field width.
“byrow”, transpose the incoming data. This option allows you to import files where the series are contained in rows (one row per series) rather than columns.
“lastcol”, include an implied last column. For lines that end with a delimiter, this option adds an additional column. For example, when importing a CSV file, EViews normally determines that a line whose last character is the delimiter (e.g., “name, description, date,”) has 3 columns; with this option set, EViews will determine the line to have 4 columns. Note that this is not the same as a line containing “name, description, date”, which EViews will always determine to have 3 columns regardless of whether the option is set.
A central component of the table_description element is the format statement. You may specify the data format using the following table descriptors:
Fortran Format:
fformat=([n1]Type[Width][.Precision], [n2]Type[Width][.Precision], ...)
where Type specifies the underlying data type, and may be one of the following,
I - integer
F - fixed precision
E - scientific
A - alphanumeric
X - skip
and n1, n2, ... are the number of times to read using the descriptor (default=1). More complicated Fortran compatible variations on this format are possible.
Column Range Format:
rformat="name1 [$|#]n1[-n2] name2 [$|#]n3[-n4] ..."
where the optional type is “$” for string or “#” for number, and n1, n2, n3, n4, etc. are the ranges of columns containing the data.
C printf/scanf Format:
cformat="fmt"
where fmt follows standard C language (printf/scanf) format rules.
The optional variables_description may be formed using the elements:
“colhead=int”, number of table rows to be treated as column headers.
“namepos = [first|firstatt|last|lastatt|all|none|attonly|discard|custom]”, specifies which row(s) of the column headers should be used to form the column name, and how the remaining rows are used. The setting “first” (or “last”) uses the first (or last) column header row as the object name, with all other rows used as the object's description. Similarly, “firstatt” (or “lastatt”) uses the first (or last) row as the name field, but stores all others as custom attributes. The setting “all” concatenates all column header fields into the object's name. “none” concatenates all column header fields into the object's description. “attonly” saves all column header fields into the object's custom attributes. “discard” skips all header rows altogether, and “custom” allows you to specify explicitly how to treat each column header row using the “colheadnames=” argument. The default setting is “all” if no “colheadnames=” is specified, and “custom” otherwise.
"colheadnames = ("arg1", "arg2")", required when “namepos=custom”. Specifies the name & type of each column header row. “Name” will be mapped to the object name, “Description” to the object's description field, and the rest will be stored as custom object attributes. Any blank name will cause that column header row to be skipped.
“nonames”, the file does not contain a column header (same as “colhead=0”).
“names=("arg1", "arg2",…)”, user-specified column names, where arg1, arg2, … are names of the first series, the second series, etc. When names are provided, these override any names that would otherwise be formed from the column headers.
“descriptions=("arg1", "arg2",…)”, user specified descriptions of the series. If descriptions are provided, these override any descriptions that would otherwise be read from the data.
“types=("arg1","arg2",…)”, user specified data types of the series. If types are provided they will override the types automatically detected by EViews. You may use any of the following format keywords: “a” (character data), “f” (numeric data), “d” (dates), or “w” (EViews automatic detection). Note that the types appear without quotes: e.g., “types=(a,a,a)”.
“na="arg1"”, text used to represent observations that are missing from the file. The text should be enclosed in double quotes.
“scan=[int|all]”, number of rows of the table to scan during automatic format detection (“scan=all” scans the entire file). Note: If a “range=” argument is not specified, then EViews will only scan the first five rows of data to try to determine the data format for each column. Likewise, if the “na=” argument is not specified, EViews will also try to determine possible NA values by looking for repeated values in the same rows. If the first five rows are not enough to correctly determine the data format, use the “scan=” argument to instruct EViews to look at more rows. In addition, you may want to specify the “na=” value to override any NA value that EViews may determine dynamically on its own.
“firstobs=int”, first observation to be imported from the table of data (default is 1). This option may be used to start reading rows from partway through the table.
“lastobs = int”, last observation to be read from the table of data (default is last observation of the file). This option may be used to read only part of the file, which may be useful for testing.
Text and Binary File Examples (.txt, .csv, etc.)
wfopen c:\data.csv skip=5, names=(gdp, inv, cons)
reads “Data.CSV” into a new workfile page, skipping the first 5 rows and naming the series GDP, INV, and CONS.
wfopen(type=text) c:\date.txt delim=comma
loads the comma delimited data DATE.TXT into a new workfile.
wfopen(type=raw) c:\data.txt skip=8, rectype=fixed, format=(F10,X23,A4)
loads a text file with fixed length data into a new workfile, skipping the first 8 rows. The reading is done as follows: read the first 10 characters as a fixed precision number, after that, skip the next 23 characters (X23), and then read the next 4 characters as strings (A4).
wfopen(type=raw) c:\data.txt rectype=fixed, format=2(4F8,2I2)
loads the text file as a workfile using the specified explicit format. The data will be a repeat of four fixed precision numbers of length 8 and two integers of length 2. This is the same description as “format=(F8,F8,F8,F8,I2,I2,F8,F8,F8,F8,I2,I2)”.
wfopen(type=raw) c:\data.txt rectype=fixed, rformat="GDP 1-2 INV 3 CONS 6-9"
loads the text file as a workfile using column range syntax. The reading is done as follows: the first series is located in the first and second characters of each row, the second series occupies the 3rd character, and the third series is located in characters 6 through 9. The series will be named GDP, INV, and CONS.
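As a further illustration (the file name, delimiter, and missing value code are assumptions), a semicolon-delimited text file with one header row could be read with:
wfopen(type=text) "c:\data\survey.txt" custom=";", na="-999", colhead=1, namepos=first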
Datasets
The syntax for reading data from the remaining sources (Access, Gauss, ODBC, SAS program, SAS transport, SPSS, SPSS portable, Stata) is:
wfopen(options) source_description table_description [@keep keep_list] [@drop drop_list] [@selectif condition]
Note that for this purpose we view Access and ODBC as datasets.
ODBC or Microsoft Access
The syntax for reading from an ODBC or Microsoft Access data source is
wfopen(options) source_description table_description [@keep keep_list] [@drop drop_list] [@selectif condition]
When reading from an ODBC or Microsoft Access data source, you must provide a table_description to indicate the table of data to be read. You may provide this information in one of two ways: by entering the name of a table in the data source, or by including an SQL query statement enclosed in double quotes.
Note that ODBC support is provided only in the EViews Enterprise Edition.
ODBC Examples
wfopen c:\data.dsn CustomerTable
opens in a new workfile the table named CUSTOMERTABLE from the ODBC database described in the DATA.DSN file.
wfopen(type=odbc) "my server" "select * from customers where id>30" @keep p*
opens a new workfile using an SQL query against the database referenced by the ODBC data source “MY SERVER”, keeping only variables that begin with the letter P. The query selects all variables from the table CUSTOMERS where the ID variable takes a value greater than 30.
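A table name may also be combined with an observation filter. For example (the data source, table, and field names are illustrative):
wfopen(type=odbc) "sales_dsn" orders @selectif if amount>1000
reads the ORDERS table from the ODBC data source SALES_DSN, keeping only the rows in which AMOUNT exceeds 1000.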
Other Dataset Types
The syntax for reading data from the remaining sources (Gauss, SAS program, SAS transport, SPSS, SPSS portable, Stata) is:
wfopen(options) source_description [@keep keep_list] [@drop drop_list] [@selectif condition]
Note that no table_description is required.
SAS Program Example
If a data file, “Sales.DAT”, contains the following space delimited data:
AZ 110 1002
CA 200 2003
NM 90 908
OR 120 708
WA 113 1123
UT 98 987
then the following SAS program file can be read by EViews to open the data:
Data sales;
infile 'sales.dat';
input state $ price sales;
run;
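Assuming the program above is saved as “Sales.SAS” on the C:\ drive (an illustrative location), the data could then be opened with:
wfopen(type=sasprog) c:\sales.sas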
SAS Transport Examples
wfopen(type=sasxport) c:\data.xpt
loads a SAS transport file “data.XPT” into a new workfile.
wfopen c:\inst.sas
creates a workfile by reading from external data using the SAS program statements in “Inst.SAS”. The program may contain a limited set of SAS statements which are commonly used in reading in a data file.
Stata Examples
To load a Stata file “Data.DTA” into a new workfile, dropping maps MAP1 and MAP2, you may enter:
wfopen c:\data.dta @dropmap map1 map2
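To read only a subset of the file, the @keep and @selectif modifiers may be combined; for example (the variable names are illustrative):
wfopen c:\data.dta @keep wage educ @selectif if educ>12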
To download the sports cards dataset from Stock and Watson’s Introduction to Econometrics you may use the command:
wfopen http://wps.aw.com/wps/media/objects/3254/3332253/datasets2e/datasets/Sportscards.dta
which will load the Stata dataset directly into EViews from the publisher’s website (as of 08/2009).
Cross-references
See “Workfile Basics” for a discussion of workfiles.
See also pageload, read, fetch, wfsave, wfclose and pagesave.