by Lisa Tauxe, Lori Jonestrask, Nick Swanson-Hysell and Nick Jarboe
This notebook demonstrates the use of PmagPy functions from within a Jupyter notebook. More information is available in the PmagPy cookbook http://earthref.org/PmagPy/cookbook. For examples of how to use PmagPy scripts on the command line, see the PmagPy-cli notebook.
import pmagpy.pmag as pmag
import pmagpy.pmagplotlib as pmagplotlib
import pmagpy.ipmag as ipmag
import pmagpy.contribution_builder as cb
from pmagpy import convert_2_magic as convert
import matplotlib.pyplot as plt # our plotting buddy
import numpy as np # the fabulous NumPy package
import pandas as pd # and of course Pandas
# test if Basemap and/or cartopy is installed
has_basemap, Basemap = pmag.import_basemap()
has_cartopy, Cartopy = pmag.import_cartopy()
# test if xlwt is installed (allows you to export to excel)
try:
import xlwt
has_xlwt = True
except ImportError:
has_xlwt = False
# This allows you to make matplotlib plots inside the notebook.
%matplotlib inline
from IPython.display import Image
import os
print('All modules imported!')
All modules imported!
The functions in this notebook are listed alphabetically, so here is a handy guide by function:
Calculations:
Plots:
Maps:
Working with MagIC:
reading MagIC files : reading in MagIC formatted files
writing MagIC files : outputting MagIC formatted files
combine_magic : combines two MagIC formatted files of same type
convert_ages : convert ages in downloaded MagIC file to Ma
grab_magic_key : prints out a single column from a MagIC format file
magic_select : selects data from MagIC format file given conditions (e.g., method_codes contain string)
sites_extract : makes excel or latex files from sites.txt for publications
criteria_extract : makes excel or latex files from criteria.txt for publications
specimens_extract : makes excel or latex files from specimens.txt for publications
contributions : work with data model 3.0 MagIC contributions
conversion scripts : convert many laboratory measurement formats to the MagIC data model 3 format
other handy scripts
Plots appear inline in the notebook (thanks to the %matplotlib inline magic command), but all matplotlib plots can be saved with the command:
plt.savefig('PATH_TO_FILE_NAME.FMT')
and then viewed in the notebook with:
Image('PATH_TO_FILE_NAME.FMT')
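The save-and-view pattern can be sketched as a minimal runnable example; the Agg backend and the temp-file path are assumptions made so that the sketch also runs outside a notebook:

```python
import os
import tempfile

import matplotlib
matplotlib.use("Agg")  # headless backend (assumption) so this also runs outside a notebook
import matplotlib.pyplot as plt

# save any matplotlib figure to a file...
path = os.path.join(tempfile.gettempdir(), "demo_plot.png")
plt.plot([0, 1, 2], [0, 1, 4], "bo-")
plt.savefig(path)
plt.close()
# ...then, in a notebook, display it with: Image(path)
print(os.path.exists(path))
```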
See the first few lines of this sample file below:
with open('data_files/3_0/McMurdo/samples.txt') as f:
for line in f.readlines()[:3]:
print(line, end="")
tab samples azimuth azimuth_dec_correction citations description dip geologic_classes geologic_types lat lithologies lon method_codes orientation_quality sample site 260 0 This study Archived samples from 1965, 66 expeditions. -57 Extrusive:Igneous Lava Flow -77.85 Trachyte 166.64 SO-SIGHT:FS-FD g mc01a mc01
MagIC formatted data files can be imported into a notebook in one of two ways: as a Pandas DataFrame or as a list of dictionaries.
In this notebook, we generally read MagIC tables into a Pandas Dataframe with a command like:
meas_df = pd.read_csv('MEASUREMENTS_FILE_PATH',sep='\t',header=1)
These data can then be manipulated with Pandas functions (https://pandas.pydata.org/)
meas_df=pd.read_csv('data_files/3_0/McMurdo/measurements.txt',sep='\t',header=1)
meas_df.head()
experiment | specimen | measurement | dir_csd | dir_dec | dir_inc | hyst_charging_mode | hyst_loop | hyst_sweep_rate | treat_ac_field | ... | timestamp | magn_r2_det | magn_x_sigma | magn_xyz_sigma | magn_y_sigma | magn_z_sigma | susc_chi_mass | susc_chi_qdr_mass | susc_chi_qdr_volume | susc_chi_volume | |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | mc01f-LP-DIR-AF | mc01f | mc01f-LP-DIR-AF1 | 0.4 | 171.9 | 31.8 | NaN | NaN | NaN | 0.0000 | ... | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN |
1 | mc01f-LP-DIR-AF | mc01f | mc01f-LP-DIR-AF2 | 0.4 | 172.0 | 30.1 | NaN | NaN | NaN | 0.0050 | ... | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN |
2 | mc01f-LP-DIR-AF | mc01f | mc01f-LP-DIR-AF3 | 0.5 | 172.3 | 30.4 | NaN | NaN | NaN | 0.0075 | ... | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN |
3 | mc01f-LP-DIR-AF | mc01f | mc01f-LP-DIR-AF4 | 0.4 | 172.1 | 30.4 | NaN | NaN | NaN | 0.0100 | ... | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN |
4 | mc01f-LP-DIR-AF | mc01f | mc01f-LP-DIR-AF5 | 0.5 | 171.9 | 30.8 | NaN | NaN | NaN | 0.0125 | ... | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN |
5 rows × 60 columns
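As a quick illustration of the kind of Pandas manipulation meant here, consider a toy stand-in for a measurements table (the column names follow the MagIC data model; the values are made up):

```python
import pandas as pd

# toy stand-in for a table read with pd.read_csv(..., sep='\t', header=1)
meas_df = pd.DataFrame({
    'specimen': ['mc01f', 'mc01f', 'mc02a'],
    'dir_dec': [171.9, 172.0, 10.2],
    'dir_inc': [31.8, 30.1, -45.0],
    'treat_ac_field': [0.0, 0.005, 0.0],
})
# typical manipulations: select one specimen and summarize its steps
one_spec = meas_df[meas_df.specimen == 'mc01f']
print(len(one_spec))             # 2
print(one_spec.dir_dec.mean())   # mean declination of those two steps
```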
Alternatively, the user may wish to use a list of dictionaries compatible with many pmag functions. For that, use the pmag.magic_read() function:
help (pmag.magic_read)
Help on function magic_read in module pmagpy.pmag: magic_read(infile, data=None, return_keys=False, verbose=False) Reads a Magic template file, returns data in a list of dictionaries. Parameters ___________ Required: infile : the MagIC formatted tab delimited data file first line contains 'tab' in the first column and the data file type in the second (e.g., measurements, specimen, sample, etc.) Optional: data : data read in with, e.g., file.readlines() Returns _______ list of dictionaries, file type
meas_dict,file_type=pmag.magic_read('data_files/3_0/McMurdo/measurements.txt')
print (file_type)
print (meas_dict[0])
measurements {'experiment': 'mc01f-LP-DIR-AF', 'specimen': 'mc01f', 'measurement': 'mc01f-LP-DIR-AF1', 'dir_csd': '0.4', 'dir_dec': '171.9', 'dir_inc': '31.8', 'hyst_charging_mode': '', 'hyst_loop': '', 'hyst_sweep_rate': '', 'treat_ac_field': '0.0', 'treat_ac_field_dc_off': '', 'treat_ac_field_dc_on': '', 'treat_ac_field_decay_rate': '', 'treat_dc_field': '0.0', 'treat_dc_field_ac_off': '', 'treat_dc_field_ac_on': '', 'treat_dc_field_decay_rate': '', 'treat_dc_field_phi': '0.0', 'treat_dc_field_theta': '0.0', 'treat_mw_energy': '', 'treat_mw_integral': '', 'treat_mw_power': '', 'treat_mw_time': '', 'treat_step_num': '1', 'treat_temp': '273', 'treat_temp_dc_off': '', 'treat_temp_dc_on': '', 'treat_temp_decay_rate': '', 'magn_mass': '', 'magn_moment': '2.7699999999999996e-05', 'magn_volume': '', 'citations': 'This study', 'instrument_codes': '', 'method_codes': 'LT-NO:LP-DIR-AF', 'quality': 'g', 'standard': 'u', 'meas_field_ac': '', 'meas_field_dc': '', 'meas_freq': '', 'meas_n_orient': '', 'meas_orient_phi': '', 'meas_orient_theta': '', 'meas_pos_x': '', 'meas_pos_y': '', 'meas_pos_z': '', 'meas_temp': '273', 'meas_temp_change': '', 'analysts': 'Jason Steindorf', 'description': '', 'software_packages': 'pmagpy-1.65b', 'timestamp': '', 'magn_r2_det': '', 'magn_x_sigma': '', 'magn_xyz_sigma': '', 'magn_y_sigma': '', 'magn_z_sigma': '', 'susc_chi_mass': '', 'susc_chi_qdr_mass': '', 'susc_chi_qdr_volume': '', 'susc_chi_volume': ''}
To write out a MagIC table from a Pandas DataFrame, first convert it to a list of dictionaries using a command like:
dicts = df.to_dict('records')
then call pmag.magic_write().
From a list of dictionaries, you can just call pmag.magic_write() directly.
help(pmag.magic_write)
Help on function magic_write in module pmagpy.pmag: magic_write(ofile, Recs, file_type) Parameters _________ ofile : path to output file Recs : list of dictionaries in MagIC format file_type : MagIC table type (e.g., specimens) Return : [True,False] : True if successful ofile : same as input Effects : writes a MagIC formatted file from Recs
meas_dicts = meas_df.to_dict('records')
pmag.magic_write('my_measurements.txt', meas_dicts, 'measurements')
25470 records written to file my_measurements.txt
(True, 'my_measurements.txt')
[MagIC Database] [command line version]
MagIC tables have many columns, only some of which are used in a particular instance. So combining files of the same type must be done carefully to ensure that the right data come under the right headings. The program combine_magic can be used to combine any number of MagIC files of a given type.
It reads in MagIC formatted files of a common type (e.g., sites.txt) and combines them into a single file, taking care that all the columns are preserved. For example, if there are both AF and thermal data from a study and we created a measurements.txt formatted file for each, we could use combine_magic.py on the command line to combine them together into a single measurements.txt file. In a notebook, we use ipmag.combine_magic().
help(ipmag.combine_magic)
Help on function combine_magic in module pmagpy.ipmag: combine_magic(filenames, outfile, data_model=3, magic_table='measurements', output_dir_path='.', input_dir_path='') Takes a list of magic-formatted files, concatenates them, and creates a single file. Returns output filename if the operation was successful. Parameters ----------- filenames : list of MagIC formatted files outfile : name of output file data_model : data model number (2.5 or 3), default 3 magic_table : name of magic table, default 'measurements' Returns ---------- outfile name if success, False if failure
Here we make a list of names of two MagIC formatted measurements.txt files and use ipmag.combine_magic() to put them together.
filenames=['data_files/combine_magic/af_measurements.txt','../combine_magic/therm_measurements.txt']
outfile='data_files/combine_magic/measurements.txt'
ipmag.combine_magic(filenames,outfile)
-I- Using cached data model -I- Couldn't connect to earthref.org, using cached method codes -I- Using cached method codes -I- Using cached vocabularies -I- Using cached suggested vocabularies -I- overwriting /Users/nebula/Python/PmagPy/data_files/combine_magic/measurements.txt -I- 14 records written to measurements file
'/Users/nebula/Python/PmagPy/data_files/combine_magic/measurements.txt'
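What "taking care that all the columns are preserved" means can be sketched with plain pandas (a toy illustration of the column-union behavior, not ipmag.combine_magic's actual implementation):

```python
import pandas as pd

# two tiny measurement tables with different columns (made-up values)
af = pd.DataFrame({'specimen': ['s1'], 'treat_ac_field': [0.01]})
therm = pd.DataFrame({'specimen': ['s2'], 'treat_temp': [373.0]})

# concatenating keeps the union of columns, with NaN where a table lacks one
combined = pd.concat([af, therm], sort=False)
print(sorted(combined.columns))  # ['specimen', 'treat_ac_field', 'treat_temp']
print(len(combined))             # 2
```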
Files downloaded from the MagIC search interface have ages in their originally published units, but it is often desirable to have them all in a single unit. For example, if we searched the MagIC database for all absolute paleointensity data (records with method codes of 'LP-PI-TRM') from the last five million years, the data sets have a variety of age units. We can use pmag.convert_ages() to convert them all to millions of years.
First we follow the instructions for unpacking downloaded files in download_magic.
ipmag.download_magic('magic_downloaded_rows.txt',dir_path='data_files/convert_ages/',
input_dir_path='data_files/convert_ages/')
working on: 'contribution' 1 records written to file /Users/nebula/Python/PmagPy/data_files/convert_ages/contribution.txt contribution data put in /Users/nebula/Python/PmagPy/data_files/convert_ages/contribution.txt working on: 'sites' 14317 records written to file /Users/nebula/Python/PmagPy/data_files/convert_ages/sites.txt sites data put in /Users/nebula/Python/PmagPy/data_files/convert_ages/sites.txt
True
After some minimal filtering using Pandas, we can convert a DataFrame to a list of dictionaries required by most PmagPy functions and use pmag.convert_ages() to convert all the ages. The converted list of dictionaries can then be turned back into a Pandas DataFrame and either plotted or filtered further as desired.
In this example, we filter for data younger than 5 Ma and older than 0.05 Ma, then plot intensity against latitude. We can also use pmag.vdm_b() to plot the intensities expected from the present dipole moment (~80 ZAm$^2$).
help(pmag.convert_ages)
Help on function convert_ages in module pmagpy.pmag: convert_ages(Recs, data_model=3) converts ages to Ma Parameters _________ Recs : list of dictionaries in data model by data_model data_model : MagIC data model (default is 3)
# read in the sites.txt file as a dataframe
site_df=pd.read_csv('data_files/convert_ages/sites.txt',sep='\t',header=1)
# get rid of any records without intensity data or latitude
site_df=site_df.dropna(subset=['int_abs','lat'])
# Pick out the sites with 'age' filled in
site_df_age=site_df.dropna(subset=['age'])
# pick out those with age_low and age_high filled in
site_df_lowhigh=site_df.dropna(subset=['age_low','age_high'])
# concatenate the two
site_all_ages=pd.concat([site_df_age,site_df_lowhigh])
# get rid of duplicates (records with age, age_high AND age_low)
site_all_ages.drop_duplicates(inplace=True)
# Pandas reads in blanks as NaN, which pmag.convert_ages hates
# this replaces all the NaNs with blanks
site_all_ages.fillna('',inplace=True)
# converts to a list of dictionaries
sites=site_all_ages.to_dict('records')
# converts the ages to Ma
converted_df=pmag.convert_ages(sites)
# turn it back into a DataFrame
site_ages=pd.DataFrame(converted_df)
# filter away
site_ages=site_ages[site_ages.age.astype(float) <= 5]
site_ages=site_ages[site_ages.age.astype(float) >=0.05]
Let's plot them up and see what we get.
plt.plot(site_ages.lat,site_ages.int_abs*1e6,'bo')
# put on the expected values for the present dipole moment (~80 ZAm^2)
lats=np.arange(-80,70,1)
vdms=80e21*np.ones(len(lats))
bs=pmag.vdm_b(vdms,lats)*1e6
plt.plot(lats,bs,'r-')
plt.xlabel('Latitude')
plt.ylabel('Intensity ($\mu$T)')
Text(0,0.5,'Intensity ($\\mu$T)')
That is pretty awful agreement. Someday we need to figure out what is wrong with the data or our GAD hypothesis.
[MagIC Database] [command line version]
Sometimes you want to read in a MagIC file and print out the desired key. Pandas makes this easy! In this example, we will print out latitudes for each site record.
sites=pd.read_csv('data_files/download_magic/sites.txt',sep='\t',header=1)
print (sites.lat)
0 42.60264 1 42.60264 2 42.60260 3 42.60352 4 42.60350 5 42.60104 6 42.60100 7 42.73656 8 42.73660 9 42.84180 10 42.84180 11 42.86570 12 42.86570 13 42.92031 14 42.92030 15 42.56857 16 42.49964 17 42.49962 18 42.49960 19 42.50001 20 42.50000 21 42.52872 22 42.52870 23 42.45559 24 42.45560 25 42.48923 26 42.48920 27 42.46186 28 42.46190 29 42.69156 30 42.65289 31 42.65290 32 43.30504 33 43.30500 34 43.36817 35 43.36817 36 43.36820 37 43.42133 38 43.42130 39 43.88590 40 43.88590 41 43.88590 42 43.84273 43 43.84270 44 43.53289 45 43.57494 46 43.57494 47 43.57490 48 44.15663 49 44.15660 50 44.18629 51 42.60260 Name: lat, dtype: float64
[MagIC Database] [command line version]
This example demonstrates how to select MagIC records that meet a certain criterion, like having a particular method code.
Note: to output into a MagIC formatted file, we can change the DataFrame to a list of dictionaries (with df.to_dict("records")) and use pmag.magic_write().
help(pmag.magic_write)
Help on function magic_write in module pmagpy.pmag: magic_write(ofile, Recs, file_type) Parameters _________ ofile : path to output file Recs : list of dictionaries in MagIC format file_type : MagIC table type (e.g., specimens) Return : [True,False] : True if successful ofile : same as input Effects : writes a MagIC formatted file from Recs
# read in the data file
spec_df=pd.read_csv('data_files/magic_select/specimens.txt',sep='\t',header=1)
# pick out the desired data
method_key='method_codes' # this column is called 'magic_method_codes' in data model 2.5 files
spec_df=spec_df[spec_df[method_key].str.contains('LP-DIR-AF')]
specs=spec_df.to_dict('records') # export to list of dictionaries
success,ofile=pmag.magic_write('data_files/magic_select/AF_specimens.txt',specs,'specimens') # table type is 'pmag_specimens' in data model 2.5
76 records written to file data_files/magic_select/AF_specimens.txt
It is frequently desirable to format tables for publications from the MagIC formatted files. This example is for the sites.txt formatted file. It will create a site information table with the location and age information, and directions and/or intensity summary tables. The function to call is ipmag.sites_extract().
help(ipmag.sites_extract)
Help on function sites_extract in module pmagpy.ipmag: sites_extract(site_file='sites.txt', directions_file='directions.xls', intensity_file='intensity.xls', info_file='site_info.xls', output_dir_path='.', input_dir_path='', latex=False) Extracts directional and/or intensity data from a MagIC 3.0 format sites.txt file. Default output format is an Excel file. Optional latex format longtable file which can be uploaded to Overleaf or typeset with latex on your own computer. Parameters ___________ site_file : str input file name directions_file : str output file name for directional data intensity_file : str output file name for intensity data site_info : str output file name for site information (lat, lon, location, age....) output_dir_path : str path for output files input_dir_path : str path for intput file if different from output_dir_path (default is same) latex : boolean if True, output file should be latex formatted table with a .tex ending Return : [True,False], error type : True if successful Effects : writes Excel or LaTeX formatted tables for use in publications
Here is an example of how to create LaTeX files:
#latex way:
ipmag.sites_extract(directions_file='directions.tex',intensity_file='intensities.tex',
output_dir_path='data_files/3_0/McMurdo',info_file='site_info.tex',latex=True)
(True, ['/Users/nebula/Python/PmagPy/data_files/3_0/McMurdo/site_info.tex', '/Users/nebula/Python/PmagPy/data_files/3_0/McMurdo/intensities.tex', '/Users/nebula/Python/PmagPy/data_files/3_0/McMurdo/directions.tex'])
And here is how to create Excel files:
#xls way:
if has_xlwt:
print(ipmag.sites_extract(output_dir_path='data_files/3_0/McMurdo'))
(True, ['/Users/nebula/Python/PmagPy/data_files/3_0/McMurdo/site_info.xls', '/Users/nebula/Python/PmagPy/data_files/3_0/McMurdo/intensity.xls', '/Users/nebula/Python/PmagPy/data_files/3_0/McMurdo/directions.xls'])
This example is for the criteria.txt formatted file. It will create a criteria table suitable for publication in either LaTeX or Excel format. The function to call is ipmag.criteria_extract().
help(ipmag.criteria_extract)
Help on function criteria_extract in module pmagpy.ipmag: criteria_extract(crit_file='criteria.txt', output_file='criteria.xls', output_dir_path='.', input_dir_path='', latex=False) Extracts criteria from a MagIC 3.0 format criteria.txt file. Default output format is an Excel file. typeset with latex on your own computer. Parameters ___________ crit_file : str, default "criteria.txt" input file name output_file : str, default "criteria.xls" output file name output_dir_path : str, default "." output file directory input_dir_path : str, default "" path for intput file if different from output_dir_path (default is same) latex : boolean, default False if True, output file should be latex formatted table with a .tex ending Return : [True,False], data table error type : True if successful Effects : writes xls or latex formatted tables for use in publications
# latex way:
ipmag.criteria_extract(output_dir_path='data_files/3_0/Megiddo',
latex=True,output_file='criteria.tex',)
(True, ['/Users/nebula/Python/PmagPy/data_files/3_0/Megiddo/criteria.tex'])
#xls way:
if has_xlwt:
print(ipmag.criteria_extract(output_dir_path='data_files/3_0/Megiddo'))
(True, ['/Users/nebula/Python/PmagPy/data_files/3_0/Megiddo/criteria.xls'])
Similarly, it is useful to make tables for specimen (intensity) data to include in publications. Here are examples using a specimens.txt file.
help(ipmag.specimens_extract)
Help on function specimens_extract in module pmagpy.ipmag: specimens_extract(spec_file='specimens.txt', output_file='specimens.xls', landscape=False, longtable=False, output_dir_path='.', input_dir_path='', latex=False) Extracts specimen results from a MagIC 3.0 format specimens.txt file. Default output format is an Excel file. typeset with latex on your own computer. Parameters ___________ spec_file : str, default "specimens.txt" input file name output_file : str, default "specimens.xls" output file name landscape : boolean, default False if True output latex landscape table longtable : boolean if True output latex longtable output_dir_path : str, default "." output file directory input_dir_path : str, default "" path for intput file if different from output_dir_path (default is same) latex : boolean, default False if True, output file should be latex formatted table with a .tex ending Return : [True,False], data table error type : True if successful Effects : writes xls or latex formatted tables for use in publications
#latex way:
ipmag.specimens_extract(output_file='specimens.tex',landscape=True,
output_dir_path='data_files/3_0/Megiddo',latex=True,longtable=True)
(True, ['/Users/nebula/Python/PmagPy/data_files/3_0/Megiddo/specimens.tex'])
#xls way:
if has_xlwt:
print(ipmag.specimens_extract(output_dir_path='data_files/3_0/Megiddo'))
(True, ['/Users/nebula/Python/PmagPy/data_files/3_0/Megiddo/specimens.xls'])
Here are some useful functions for working with MagIC data model 3.0 contributions.
[MagIC Database] [command line version]
This program unpacks the .txt files downloaded from the MagIC database into individual text files. It also has an option to create separate files for each location.
As an example, go to the MagIC database at http://earthref.org/MAGIC/doi/10.1029/2003gc000661 and download the contribution. Make a folder called MagIC_download and move the downloaded .txt file into it. Now use the program download_magic to unpack the .txt file (magic_contribution_16533.txt).
To do this within a notebook, use the function ipmag.download_magic().
help(ipmag.download_magic)
Help on function download_magic in module pmagpy.ipmag: download_magic(infile, dir_path='.', input_dir_path='', overwrite=False, print_progress=True, data_model=3.0, separate_locs=False) takes the name of a text file downloaded from the MagIC database and unpacks it into magic-formatted files. by default, download_magic assumes that you are doing everything in your current directory. if not, you may provide optional arguments dir_path (where you want the results to go) and input_dir_path (where the downloaded file is IF that location is different from dir_path). Parameters ---------- infile : str MagIC-format file to unpack dir_path : str output directory (default ".") input_dir_path : str, default "" path for intput file if different from output_dir_path (default is same) overwrite: bool overwrite current directory (default False) print_progress: bool verbose output (default True) data_model : float MagIC data model 2.5 or 3 (default 3) separate_locs : bool create a separate directory for each location (Location_*) (default False)
ipmag.download_magic(infile='magic_contribution_16533.txt',\
input_dir_path='data_files/download_magic',dir_path='data_files/download_magic')
working on: 'contribution' 1 records written to file /Users/nebula/Python/PmagPy/data_files/download_magic/contribution.txt contribution data put in /Users/nebula/Python/PmagPy/data_files/download_magic/contribution.txt working on: 'locations' 3 records written to file /Users/nebula/Python/PmagPy/data_files/download_magic/locations.txt locations data put in /Users/nebula/Python/PmagPy/data_files/download_magic/locations.txt working on: 'sites' 52 records written to file /Users/nebula/Python/PmagPy/data_files/download_magic/sites.txt sites data put in /Users/nebula/Python/PmagPy/data_files/download_magic/sites.txt working on: 'samples' 271 records written to file /Users/nebula/Python/PmagPy/data_files/download_magic/samples.txt samples data put in /Users/nebula/Python/PmagPy/data_files/download_magic/samples.txt working on: 'specimens' 225 records written to file /Users/nebula/Python/PmagPy/data_files/download_magic/specimens.txt specimens data put in /Users/nebula/Python/PmagPy/data_files/download_magic/specimens.txt working on: 'measurements' 3072 records written to file /Users/nebula/Python/PmagPy/data_files/download_magic/measurements.txt measurements data put in /Users/nebula/Python/PmagPy/data_files/download_magic/measurements.txt working on: 'criteria' 20 records written to file /Users/nebula/Python/PmagPy/data_files/download_magic/criteria.txt criteria data put in /Users/nebula/Python/PmagPy/data_files/download_magic/criteria.txt working on: 'ages' 20 records written to file /Users/nebula/Python/PmagPy/data_files/download_magic/ages.txt ages data put in /Users/nebula/Python/PmagPy/data_files/download_magic/ages.txt
True
You could look at these data with dmag_magic for example...
[MagIC Database] [command line version]
We can just turn around and try to upload the file downloaded in download_magic. For this we use ipmag.upload_magic() in the same directory as for the download. You can try to upload the file you create to the MagIC database as a private contribution here: https://www2.earthref.org/MagIC/upload
help(ipmag.upload_magic)
Help on function upload_magic in module pmagpy.ipmag: upload_magic(concat=False, dir_path='.', dmodel=None, vocab='', contribution=None, input_dir_path='') Finds all magic files in a given directory, and compiles them into an upload.txt file which can be uploaded into the MagIC database. Parameters ---------- concat : boolean where True means do concatenate to upload.txt file in dir_path, False means write a new file (default is False) dir_path : string for input/output directory (default ".") dmodel : pmagpy data_model.DataModel object, if not provided will be created (default None) vocab : pmagpy controlled_vocabularies3.Vocabulary object, if not provided will be created (default None) contribution : pmagpy contribution_builder.Contribution object, if not provided will be created in directory (default None) input_dir_path : str, default "" path for intput files if different from output dir_path (default is same) Returns ---------- tuple of either: (False, error_message, errors, all_failing_items) if there was a problem creating/validating the upload file or: (filename, '', None, None) if the file creation was fully successful.
ipmag.upload_magic(dir_path='data_files/download_magic',concat=True)
-I- Removing old error files from /Users/nebula/Python/PmagPy/data_files/download_magic: locations_errors.txt, samples_errors.txt, specimens_errors.txt, sites_errors.txt, ages_errors.txt, measurements_errors.txt, criteria_errors.txt, contribution_errors.txt, images_errors.txt -W- Column 'core_depth' isn't in samples table, skipping it -W- Column 'composite_depth' isn't in samples table, skipping it -W- Invalid or missing column names, could not propagate columns -I- ages file successfully read in -I- Validating ages -I- No row errors found! -I- appending ages data to /Users/nebula/Python/PmagPy/data_files/download_magic/upload.txt -I- 20 records written to ages file -I- ages written to /Users/nebula/Python/PmagPy/data_files/download_magic/upload.txt -I- contribution file successfully read in -I- dropping these columns: version from the contribution table -I- Validating contribution -W- these rows have problems: row: 0, name: 0 -W- these columns contain bad values: data_model_version -I- Complete list of row errors can be found in /Users/nebula/Python/PmagPy/data_files/download_magic/contribution_errors.txt -I- appending contribution data to /Users/nebula/Python/PmagPy/data_files/download_magic/upload.txt -I- 1 records written to contribution file -I- contribution written to /Users/nebula/Python/PmagPy/data_files/download_magic/upload.txt -I- criteria file successfully read in -I- Validating criteria -I- No row errors found! -I- appending criteria data to /Users/nebula/Python/PmagPy/data_files/download_magic/upload.txt -I- 20 records written to criteria file -I- criteria written to /Users/nebula/Python/PmagPy/data_files/download_magic/upload.txt -I- locations file successfully read in -I- Validating locations -I- No row errors found! 
-I- appending locations data to /Users/nebula/Python/PmagPy/data_files/download_magic/upload.txt -I- 3 records written to locations file -I- locations written to /Users/nebula/Python/PmagPy/data_files/download_magic/upload.txt -I- measurements file successfully read in -I- Validating measurements -I- No row errors found! -I- appending measurements data to /Users/nebula/Python/PmagPy/data_files/download_magic/upload.txt -I- 3072 records written to measurements file -I- measurements written to /Users/nebula/Python/PmagPy/data_files/download_magic/upload.txt -I- samples file successfully read in -I- Validating samples -I- No row errors found! -I- appending samples data to /Users/nebula/Python/PmagPy/data_files/download_magic/upload.txt -I- 271 records written to samples file -I- samples written to /Users/nebula/Python/PmagPy/data_files/download_magic/upload.txt -I- sites file successfully read in -I- Validating sites -I- No row errors found! -I- appending sites data to /Users/nebula/Python/PmagPy/data_files/download_magic/upload.txt -I- 52 records written to sites file -I- sites written to /Users/nebula/Python/PmagPy/data_files/download_magic/upload.txt -I- specimens file successfully read in -I- Validating specimens -I- No row errors found! -I- appending specimens data to /Users/nebula/Python/PmagPy/data_files/download_magic/upload.txt -I- 225 records written to specimens file -I- specimens written to /Users/nebula/Python/PmagPy/data_files/download_magic/upload.txt Finished preparing upload file: /Users/nebula/Python/PmagPy/data_files/download_magic/Snake-River_13.Mar.2019.txt -W- These tables have errors: contribution -W- validation of upload file has failed. You can still upload /Users/nebula/Python/PmagPy/data_files/download_magic/Snake-River_13.Mar.2019.txt to MagIC, but you will need to fix the above errors before your contribution can be activated.
(False, 'Validation of your upload file has failed.\nYou can still upload /Users/nebula/Python/PmagPy/data_files/download_magic/Snake-River_13.Mar.2019.txt to MagIC,\nbut you will need to fix the above errors before your contribution can be activated.', ['contribution'], {'contribution': {'rows': num value_pass_data_model_version_cv \ 0 0 "3" is not in controlled vocabulary for magic_... issues 0 {'value_pass_data_model_version_cv': '"3" is n... , 'missing_columns': [], 'missing_groups': Index([], dtype='object')}})
If this were your own study, you could now go to https://earthref.org/MagIC and upload your contribution to a Private Workspace, validate, assign a DOI and activate!
MagIC data model 3 took out redundant columns in the MagIC tables so the hierarchy of specimens (in the measurements and specimens tables) up to samples, sites and locations is lost. To put these back into the measurement table, we have the function cb.add_sites_to_meas_table(), which is super handy when data analysis requires it.
help(cb.add_sites_to_meas_table)
Help on function add_sites_to_meas_table in module pmagpy.contribution_builder: add_sites_to_meas_table(dir_path) Add site columns to measurements table (e.g., to plot intensity data), or generate an informative error message. Parameters ---------- dir_path : str directory with data files Returns ---------- status : bool True if successful, else False data : pandas DataFrame measurement data with site/sample
status,meas_df=cb.add_sites_to_meas_table('data_files/3_0/McMurdo')
meas_df.columns
Index(['experiment', 'specimen', 'measurement', 'dir_csd', 'dir_dec', 'dir_inc', 'hyst_charging_mode', 'hyst_loop', 'hyst_sweep_rate', 'treat_ac_field', 'treat_ac_field_dc_off', 'treat_ac_field_dc_on', 'treat_ac_field_decay_rate', 'treat_dc_field', 'treat_dc_field_ac_off', 'treat_dc_field_ac_on', 'treat_dc_field_decay_rate', 'treat_dc_field_phi', 'treat_dc_field_theta', 'treat_mw_energy', 'treat_mw_integral', 'treat_mw_power', 'treat_mw_time', 'treat_step_num', 'treat_temp', 'treat_temp_dc_off', 'treat_temp_dc_on', 'treat_temp_decay_rate', 'magn_mass', 'magn_moment', 'magn_volume', 'citations', 'instrument_codes', 'method_codes', 'quality', 'standard', 'meas_field_ac', 'meas_field_dc', 'meas_freq', 'meas_n_orient', 'meas_orient_phi', 'meas_orient_theta', 'meas_pos_x', 'meas_pos_y', 'meas_pos_z', 'meas_temp', 'meas_temp_change', 'analysts', 'description', 'software_packages', 'timestamp', 'magn_r2_det', 'magn_x_sigma', 'magn_xyz_sigma', 'magn_y_sigma', 'magn_z_sigma', 'susc_chi_mass', 'susc_chi_qdr_mass', 'susc_chi_qdr_volume', 'susc_chi_volume', 'sequence', 'sample', 'site'], dtype='object')
The MagIC data model has several different forms of magnetization with different normalizations (moment, volume, or mass). So to find the one used in a particular measurements table we can use this handy function.
help(cb.get_intensity_col)
Help on function get_intensity_col in module pmagpy.contribution_builder:

get_intensity_col(data)
    Check measurement dataframe for intensity columns 'magn_moment', 'magn_volume',
    'magn_mass', 'magn_uncal'. Return the first intensity column that is in the
    dataframe AND has data.

    Parameters
    ----------
    data : pandas DataFrame

    Returns
    ---------
    str
        intensity method column or ""
magn_col=cb.get_intensity_col(meas_df)
print (magn_col)
magn_moment
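The logic behind this is simple enough to sketch: walk the candidate columns in order and return the first one that is both present and non-empty. This is an illustration of the idea, not the PmagPy implementation (the helper name here is invented):

```python
import pandas as pd

def first_intensity_col(df):
    """Return the first intensity column present AND containing data,
    mimicking (not reproducing) cb.get_intensity_col."""
    for col in ['magn_moment', 'magn_volume', 'magn_mass', 'magn_uncal']:
        if col in df.columns and df[col].dropna().size:
            return col
    return ""

# 'magn_volume' is present but empty, so 'magn_mass' wins
df = pd.DataFrame({'magn_volume': [None, None],
                   'magn_mass': [2.5e-3, 2.4e-3]})
print(first_intensity_col(df))
```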
This conversion has not been written yet. If you have this file format and wish to convert it to the MagIC file format, please let us know.
[MagIC Database] [command line version]
To convert the binary formatted 2G Enterprises measurement files, we can use the function convert._2g_bin() in the convert_2_magic module (imported as convert).
help(convert._2g_bin)
Help on function _2g_bin in module pmagpy.convert_2_magic: _2g_bin(dir_path='.', mag_file='', meas_file='measurements.txt', spec_file='specimens.txt', samp_file='samples.txt', site_file='sites.txt', loc_file='locations.txt', or_con='3', specnum=0, samp_con='2', corr='1', gmeths='FS-FD:SO-POM', location='unknown', inst='', user='', noave=False, input_dir='', lat='', lon='') Convert 2G binary format file to MagIC file(s) Parameters ---------- dir_path : str output directory, default "." mag_file : str input file name meas_file : str output measurement file name, default "measurements.txt" spec_file : str output specimen file name, default "specimens.txt" samp_file: str output sample file name, default "samples.txt" site_file : str output site file name, default "sites.txt" loc_file : str output location file name, default "locations.txt" or_con : number orientation convention, default '3', see info below specnum : int number of characters to designate a specimen, default 0 samp_con : str sample/site naming convention, default '2', see info below corr: str default '1' gmeths : str sampling method codes, default "FS-FD:SO-POM", see info below location : str location name, default "unknown" inst : str instrument, default "" user : str user name, default "" noave : bool do not average duplicate measurements, default False (so by default, DO average) input_dir : str input file directory IF different from dir_path, default "" lat : float latitude, default "" lon : float longitude, default "" Returns --------- Tuple : (True or False indicating if conversion was sucessful, meas_file name written) Info ---------- Orientation convention: [1] Lab arrow azimuth= mag_azimuth; Lab arrow dip=-field_dip i.e., field_dip is degrees from vertical down - the hade [default] [2] Lab arrow azimuth = mag_azimuth-90; Lab arrow dip = -field_dip i.e., mag_azimuth is strike and field_dip is hade [3] Lab arrow azimuth = mag_azimuth; Lab arrow dip = 90-field_dip i.e., lab arrow same as field 
arrow, but field_dip was a hade. [4] lab azimuth and dip are same as mag_azimuth, field_dip [5] lab azimuth is same as mag_azimuth,lab arrow dip=field_dip-90 [6] Lab arrow azimuth = mag_azimuth-90; Lab arrow dip = 90-field_dip [7] all others you will have to either customize your self or e-mail ltauxe@ucsd.edu for help. Sample naming convention: [1] XXXXY: where XXXX is an arbitrary length site designation and Y is the single character sample designation. e.g., TG001a is the first sample from site TG001. [default] [2] XXXX-YY: YY sample from site XXXX (XXX, YY of arbitary length) [3] XXXX.YY: YY sample from site XXXX (XXX, YY of arbitary length) [4-Z] XXXX[YYY]: YYY is sample designation with Z characters from site XXX [5] site name = sample name [6] site name entered in site_name column in the orient.txt format input file -- NOT CURRENTLY SUPPORTED [7-Z] [XXX]YYY: XXX is site designation with Z characters from samples XXXYYY Sampling method codes: FS-FD field sampling done with a drill FS-H field sampling done with hand samples FS-LOC-GPS field location done with GPS FS-LOC-MAP field location done with map SO-POM a Pomeroy orientation device was used SO-ASC an ASC orientation device was used SO-MAG orientation with magnetic compass SO-SUN orientation with sun compass
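As a worked example of the orientation conventions listed above, convention '3' (the one used in the call below) maps the field measurements to the lab arrow as azimuth = mag_azimuth and dip = 90 - field_dip, since field_dip was recorded as a hade (degrees from vertical):

```python
# orientation convention '3' from the docstring above:
# lab arrow azimuth = mag_azimuth; lab arrow dip = 90 - field_dip
def lab_arrow_convention_3(mag_azimuth, field_dip):
    return mag_azimuth, 90.0 - field_dip

# e.g., a hade of 62 degrees corresponds to a lab arrow dip of 28 degrees
az, dip = lab_arrow_convention_3(147.0, 62.0)
print(az, dip)
```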
# set the input directory
input_dir='data_files/convert_2_magic/2g_bin_magic/mn1/'
mag_file='mn001-1a.dat'
convert._2g_bin(mag_file=mag_file,input_dir=input_dir,dir_path=input_dir)
importing mn001-1a adding measurement column to measurements table! -I- writing measurements records to /Users/nebula/Python/PmagPy/measurements.txt -I- 19 records written to measurements file -I- writing specimens records to /Users/nebula/Python/PmagPy/data_files/convert_2_magic/2g_bin_magic/mn1/specimens.txt -I- 1 records written to specimens file -I- writing samples records to /Users/nebula/Python/PmagPy/data_files/convert_2_magic/2g_bin_magic/mn1/samples.txt -I- 1 records written to samples file -I- writing sites records to /Users/nebula/Python/PmagPy/data_files/convert_2_magic/2g_bin_magic/mn1/sites.txt -I- 1 records written to sites file -I- writing locations records to /Users/nebula/Python/PmagPy/data_files/convert_2_magic/2g_bin_magic/mn1/locations.txt -I- 1 records written to locations file -I- writing measurements records to /Users/nebula/Python/PmagPy/data_files/convert_2_magic/2g_bin_magic/mn1/measurements.txt -I- 19 records written to measurements file
(True, 'measurements.txt')
These are measurement data for a single specimen, so we can take a quickie look at the data in an equal area projection.
help(ipmag.plot_di)
Help on function plot_di in module pmagpy.ipmag: plot_di(dec=None, inc=None, di_block=None, color='k', marker='o', markersize=20, legend='no', label='', title='', edge='') Plot declination, inclination data on an equal area plot. Before this function is called a plot needs to be initialized with code that looks something like: >fignum = 1 >plt.figure(num=fignum,figsize=(10,10),dpi=160) >ipmag.plot_net(fignum) Required Parameters ----------- dec : declination being plotted inc : inclination being plotted or di_block: a nested list of [dec,inc,1.0] (di_block can be provided instead of dec, inc in which case it will be used) Optional Parameters (defaults are used if not specified) ----------- color : the default color is black. Other colors can be chosen (e.g. 'r') marker : the default marker is a circle ('o') markersize : default size is 20 label : the default label is blank ('') legend : the default is no legend ('no'). Putting 'yes' will plot a legend. edge : marker edge color - if blank, is color of marker
meas_df=pd.read_csv(input_dir+'measurements.txt',sep='\t',header=1)
ipmag.plot_net(1)
ipmag.plot_di(dec=meas_df['dir_dec'],inc=meas_df['dir_inc'])
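Under the hood, equal-area plots like this map a direction (dec, inc) onto the x, y plane with the Schmidt (equal-area) projection, where the radius is sqrt(1 - |sin(inc)|) so that equal solid angles project to equal areas. A minimal sketch of that transform (a simplified stand-in, not the pmagplotlib code):

```python
import numpy as np

# equal-area (Schmidt) projection of a direction onto the unit circle
def equal_area_xy(dec, inc):
    dec, inc = np.radians(dec), np.radians(inc)
    r = np.sqrt(1.0 - np.abs(np.sin(inc)))  # inc=90 -> center, inc=0 -> rim
    return r * np.sin(dec), r * np.cos(dec)  # x east, y north

x, y = equal_area_xy(0.0, 90.0)  # straight down plots at the origin
print(x, y)
```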
[MagIC Database] [command line version]
This program converts Micromag hysteresis files into MagIC formatted files. Because this program creates files for uploading to the MagIC database, specimens should also have sample/site/location information, which can be provided via keyword arguments. If this information is not available, for example if this is a synthetic specimen, set syn=True to use a synthetic name.
Someone named Lima Tango has measured a synthetic specimen named myspec for hysteresis and saved the data in a file named agm_magic_example.agm in the agm_magic/agm_directory folder. The backfield IRM curve for the same specimen was saved in the same directory as agm_magic_example.irm. Use the function convert.agm() to convert the data into a measurements.txt output file. For the backfield IRM file, set the keyword "bak" to True. These were measured using cgs units, so be sure to set the units keyword argument properly. Combine the two output files together using the instructions for combine_magic. The agm files can be plotted using hysteresis_magic but the back-field plots are broken.
help(convert.agm)
Help on function agm in module pmagpy.convert_2_magic: agm(agm_file, dir_path='.', input_dir_path='', meas_outfile='', spec_outfile='', samp_outfile='', site_outfile='', loc_outfile='', spec_infile='', samp_infile='', site_infile='', specimen='', specnum=0, samp_con='1', location='unknown', instrument='', institution='', bak=False, syn=False, syntype='', units='cgs', fmt='new', user='') Convert AGM format file to MagIC file(s) Parameters ---------- agm_file : str input file name dir_path : str working directory, default "." input_dir_path : str input file directory IF different from dir_path, default "" meas_outfile : str output measurement file name, default "" (default output is SPECNAME.magic) spec_outfile : str output specimen file name, default "" (default output is SPEC_specimens.txt) samp_outfile: str output sample file name, default "" (default output is SPEC_samples.txt) site_outfile : str output site file name, default "" (default output is SPEC_sites.txt) loc_outfile : str output location file name, default "" (default output is SPEC_locations.txt) samp_infile : str existing sample infile (not required), default "" site_infile : str existing site infile (not required), default "" specimen : str specimen name, default "" (default is to take base of input file name, e.g. 
SPEC.agm) specnum : int number of characters to designate a specimen, default 0 samp_con : str sample/site naming convention, default '1', see info below location : str location name, default "unknown" instrument : str instrument name, default "" institution : str institution name, default "" bak : bool IRM backfield curve, default False syn : bool synthetic, default False syntype : str synthetic type, default "" units : str units, default "cgs" fmt: str input format, options: ('new', 'old', 'xy', default 'new') user : user name Returns --------- Tuple : (True or False indicating if conversion was sucessful, meas_file name written) Info -------- Sample naming convention: [1] XXXXY: where XXXX is an arbitrary length site designation and Y is the single character sample designation. e.g., TG001a is the first sample from site TG001. [default] [2] XXXX-YY: YY sample from site XXXX (XXX, YY of arbitary length) [3] XXXX.YY: YY sample from site XXXX (XXX, YY of arbitary length) [4-Z] XXXX[YYY]: YYY is sample designation with Z characters from site XXX [5] site name = sample name [6] site name entered in site_name column in the orient.txt format input file -- NOT CURRENTLY SUPPORTED [7-Z] [XXX]YYY: XXX is site designation with Z characters from samples XXXYYY
convert.agm('agm_magic_example.agm',dir_path='data_files/convert_2_magic/agm_magic/',
specimen='myspec',fmt='old',meas_outfile='agm.magic')
-I- writing specimens records to /Users/nebula/Python/PmagPy/data_files/convert_2_magic/agm_magic/myspec_specimens.txt -I- 1 records written to specimens file -I- writing samples records to /Users/nebula/Python/PmagPy/data_files/convert_2_magic/agm_magic/samples.txt -I- 1 records written to samples file -I- writing sites records to /Users/nebula/Python/PmagPy/data_files/convert_2_magic/agm_magic/sites.txt -I- 1 records written to sites file -I- writing locations records to /Users/nebula/Python/PmagPy/data_files/convert_2_magic/agm_magic/myspec_locations.txt -I- 1 records written to locations file -I- writing measurements records to /Users/nebula/Python/PmagPy/data_files/convert_2_magic/agm_magic/agm.magic -I- 284 records written to measurements file
(True, 'agm.magic')
convert.agm('agm_magic_example.irm',dir_path='data_files/convert_2_magic/agm_magic/',
specimen='myspec',fmt='old',meas_outfile='irm.magic')
-I- overwriting /Users/nebula/Python/PmagPy/data_files/convert_2_magic/agm_magic/myspec_specimens.txt -I- 1 records written to specimens file -I- overwriting /Users/nebula/Python/PmagPy/data_files/convert_2_magic/agm_magic/samples.txt -I- 1 records written to samples file -I- overwriting /Users/nebula/Python/PmagPy/data_files/convert_2_magic/agm_magic/sites.txt -I- 1 records written to sites file -I- overwriting /Users/nebula/Python/PmagPy/data_files/convert_2_magic/agm_magic/myspec_locations.txt -I- 1 records written to locations file -I- writing measurements records to /Users/nebula/Python/PmagPy/data_files/convert_2_magic/agm_magic/irm.magic -I- 41 records written to measurements file
(True, 'irm.magic')
infiles=['data_files/convert_2_magic/agm_magic/agm.magic','data_files/convert_2_magic/agm_magic/irm.magic']
ipmag.combine_magic(infiles,'data_files/convert_2_magic/agm_magic/measurements.txt')
-I- writing measurements records to /Users/nebula/Python/PmagPy/data_files/convert_2_magic/agm_magic/measurements.txt -I- 325 records written to measurements file
'/Users/nebula/Python/PmagPy/data_files/convert_2_magic/agm_magic/measurements.txt'
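What combine_magic does conceptually is stack same-type MagIC tables into one. A sketch with hypothetical miniature in-memory files (MagIC files carry a 'tab<TAB>table' line first, then the column names, which is why header=1 is used when reading them with pandas):

```python
import io
import pandas as pd

# hypothetical miniature MagIC-style measurement files
agm = "tab\tmeasurements\nspecimen\tmagn_moment\nmyspec\t1.0e-5\n"
irm = "tab\tmeasurements\nspecimen\tmagn_moment\nmyspec\t-2.0e-6\n"

# stack the two tables into one, as combine_magic does for files on disk
frames = [pd.read_csv(io.StringIO(f), sep='\t', header=1) for f in (agm, irm)]
combined = pd.concat(frames, ignore_index=True)
print(len(combined))
```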
We can look at these data using hysteresis_magic:
# read in the measurements data
meas_data=pd.read_csv('data_files/convert_2_magic/agm_magic/agm.magic',sep='\t',header=1)
# pick out the hysteresis data using the method code for hysteresis lab protocol
hyst_data=meas_data[meas_data.method_codes.str.contains('LP-HYS')]
# make a list of specimens
specimens=hyst_data.specimen.unique()
cnt=1
for specimen in specimens:
    # make the dictionary of figure numbers that pmagplotlib likes
    HDD={'hyst':cnt,'deltaM':cnt+1,'DdeltaM':cnt+2}
    spec_data=hyst_data[hyst_data.specimen==specimen]
    # make a list of the field data
    B=spec_data.meas_field_dc.tolist()
    # make a list of the magnetization data
    M=spec_data.magn_moment.tolist()
    # call the plotting function
    hpars=pmagplotlib.plot_hdd(HDD,B,M,specimen)
    hpars['specimen']=specimen
    # print out the hysteresis parameters
    print (specimen,': \n',hpars)
    cnt+=3
myspec : {'hysteresis_xhf': '1.77e-05', 'hysteresis_ms_moment': '2.914e+01', 'hysteresis_mr_moment': '5.493e+00', 'hysteresis_bc': '2.195e-02', 'hysteresis_bcr': '6.702e-02', 'magic_method_codes': 'LP-BCR-HDM', 'specimen': 'myspec'}
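To see where numbers like Ms, Mr, and Bc come from, here is a toy estimate on a synthetic descending loop branch (a tanh model with parameters chosen to echo the myspec output above). The real pmagplotlib.plot_hdd does far more (high-field slope correction, delta-M curves, Bcr, etc.):

```python
import numpy as np

B = np.linspace(-1.0, 1.0, 2001)   # applied field (T)
Ms, Bc, w = 29.0, 0.022, 0.05      # assumed model parameters
M = Ms * np.tanh((B + Bc) / w)     # upper (descending) loop branch

ms_est = M.max()                   # saturation moment
mr_est = np.interp(0.0, B, M)      # remanence: M at B = 0
bc_est = -np.interp(0.0, M, B)     # coercivity: |B| where the branch crosses M = 0
print(round(ms_est, 2), round(mr_est, 2), round(bc_est, 4))
```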
[MagIC Database] [command line version]
Here we convert the Berkeley Geochronology Center's AutoCore format to MagIC using convert.bgc().
help(convert.bgc)
Help on function bgc in module pmagpy.convert_2_magic: bgc(mag_file, dir_path='.', input_dir_path='', meas_file='measurements.txt', spec_file='specimens.txt', samp_file='samples.txt', site_file='sites.txt', loc_file='locations.txt', append=False, location='unknown', site='', samp_con='1', specnum=0, meth_code='LP-NO', volume=12, user='', timezone='US/Pacific', noave=False) Convert BGC format file to MagIC file(s) Parameters ---------- mag_file : str input file name dir_path : str working directory, default "." input_dir_path : str input file directory IF different from dir_path, default "" meas_file : str output measurement file name, default "measurements.txt" spec_file : str output specimen file name, default "specimens.txt" samp_file: str output sample file name, default "samples.txt" site_file : str output site file name, default "sites.txt" loc_file : str output location file name, default "locations.txt" append : bool append output files to existing files instead of overwrite, default False location : str location name, default "unknown" site : str site name, default "" samp_con : str sample/site naming convention, default '1', see info below specnum : int number of characters to designate a specimen, default 0 meth_code : str orientation method codes, default "LP-NO" e.g. [SO-MAG, SO-SUN, SO-SIGHT, ...] volume : float volume in ccs, default 12. user : str user name, default "" timezone : str timezone in pytz library format, default "US/Pacific" list of timezones can be found at http://pytz.sourceforge.net/ noave : bool do not average duplicate measurements, default False (so by default, DO average) Returns --------- Tuple : (True or False indicating if conversion was sucessful, meas_file name written) Info -------- Sample naming convention: [1] XXXXY: where XXXX is an arbitrary length site designation and Y is the single character sample designation. e.g., TG001a is the first sample from site TG001. 
[default] [2] XXXX-YY: YY sample from site XXXX (XXX, YY of arbitary length) [3] XXXX.YY: YY sample from site XXXX (XXX, YY of arbitary length) [4-Z] XXXX[YYY]: YYY is sample designation with Z characters from site XXX [5] site name same as sample [6] site is entered under a separate column -- NOT CURRENTLY SUPPORTED [7-Z] [XXXX]YYY: XXXX is site designation with Z characters with sample name XXXXYYYY
dir_path='data_files/convert_2_magic/bgc_magic/'
convert.bgc('15HHA1-2A',dir_path=dir_path)
mag_file in bgc_magic /Users/nebula/Python/PmagPy/data_files/convert_2_magic/bgc_magic/15HHA1-2A adding measurement column to measurements table! -I- overwriting /Users/nebula/Python/PmagPy/measurements.txt -I- 21 records written to measurements file -I- writing specimens records to /Users/nebula/Python/PmagPy/data_files/convert_2_magic/bgc_magic/specimens.txt -I- 1 records written to specimens file -I- writing samples records to /Users/nebula/Python/PmagPy/data_files/convert_2_magic/bgc_magic/samples.txt -I- 1 records written to samples file -I- writing sites records to /Users/nebula/Python/PmagPy/data_files/convert_2_magic/bgc_magic/sites.txt -I- 1 records written to sites file -I- writing locations records to /Users/nebula/Python/PmagPy/data_files/convert_2_magic/bgc_magic/locations.txt -I- 1 records written to locations file -I- writing measurements records to /Users/nebula/Python/PmagPy/data_files/convert_2_magic/bgc_magic/measurements.txt -I- 21 records written to measurements file
(True, '/Users/nebula/Python/PmagPy/data_files/convert_2_magic/bgc_magic/measurements.txt')
And let's take a look
meas_df=pd.read_csv(dir_path+'measurements.txt',sep='\t',header=1)
ipmag.plot_net(1)
ipmag.plot_di(dec=meas_df['dir_dec'],inc=meas_df['dir_inc'])
[MagIC Database] [command line version]
To convert the Caltech (CIT) format to MagIC, use convert.cit().
Craig Jones’ PaleoMag software package (http://cires.colorado.edu/people/jones.craig/PMag3.html) imports various file formats, including the ’CIT’ format developed for the Caltech lab and now used in the magnetometer control software that ships with 2G magnetometers using a vertical sample-changer system. The documentation for the CIT sample format is here: http://cires.colorado.edu/people/jones.craig/PMag_Formats.html#SAM_format. Demagnetization data for each specimen are in their own file in a directory with all the data for a site or study. These files are strictly formatted, with fields determined by the character number in the line. There must be a file with the suffix ‘.sam’ in the same directory as the specimen data files, which gives details about the specimens and a list of the specimen measurement files in the directory.
The first line in the .sam file is a comment (in this case the site name), the second is the latitude and longitude followed by a declination correction. In these data, the declination correction was applied to the specimen orientations so the value of the declination correction is set to be 0.
For detailed description of the .sam and sample file formats, check the PaleoMag Formats website linked to above.
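A minimal sketch of reading the .sam header as described above: line 1 is a comment (here the site name), line 2 holds the latitude, longitude, and declination correction, and the remaining lines list the specimen files. The file contents here are invented, and the real format has stricter column rules:

```python
import io

sam_text = """PI47
48.5 -87.0 0.0
PI47-1a
PI47-2a
"""
f = io.StringIO(sam_text)
comment = f.readline().strip()                 # line 1: comment / site name
lat, lon, dec_corr = [float(x) for x in f.readline().split()]  # line 2
spec_files = [line.strip() for line in f if line.strip()]      # specimen files
print(comment, lat, lon, dec_corr, spec_files)
```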
help(convert.cit)
Help on function cit in module pmagpy.convert_2_magic: cit(dir_path='.', input_dir_path='', magfile='', user='', meas_file='measurements.txt', spec_file='specimens.txt', samp_file='samples.txt', site_file='sites.txt', loc_file='locations.txt', locname='unknown', sitename='', methods=['SO-MAG'], specnum=0, samp_con='3', norm='cc', oersted=False, noave=False, meas_n_orient='8', labfield=0, phi=0, theta=0) Converts CIT formated Magnetometer data into MagIC format for Analysis and contribution to the MagIC database Parameters ----------- dir_path : directory to output files to (default : current directory) input_dir_path : directory to input files (only needed if different from dir_path!) magfile : magnetometer file (.sam) to convert to MagIC (required) user : colon delimited list of analysts (default : "") meas_file : measurement file name to output (default : measurements.txt) spec_file : specimen file name to output (default : specimens.txt) samp_file : sample file name to output (default : samples.txt) site_file : site file name to output (default : site.txt) loc_file : location file name to output (default : locations.txt) locname : location name sitename : site name set by user instead of using samp_con methods : colon delimited list of sample method codes. full list here (https://www2.earthref.org/MagIC/method-codes) (default : SO-MAG) specnum : number of terminal characters that identify a specimen norm : is volume or mass normalization using cgs or si units (options : cc,m3,g,kg) (default : cc) oersted : demag step values are in Oersted noave : average measurement data or not. False is average, True is don't average. (default : False) samp_con : sample naming convention options as follows: [1] XXXXY: where XXXX is an arbitrary length site designation and Y is the single character sample designation. e.g., TG001a is the first sample from site TG001. 
[default] [2] XXXX-YY: YY sample from site XXXX (XXX, YY of arbitary length) [3] XXXX.YY: YY sample from site XXXX (XXX, YY of arbitary length) [4-Z] XXXX[YYY]: YYY is sample designation with Z characters from site XXX [5] site name = sample name [6] site name entered in sitename column in the orient.txt format input file -- NOT CURRENTLY SUPPORTED [7-Z] [XXX]YYY: XXX is site designation with Z characters from samples XXXYYY meas_n_orient : Number of different orientations in measurement (default : 8) labfield : DC_FIELD in microTesla (default : 0) phi : DC_PHI in degrees (default : 0) theta : DC_THETA in degrees (default : 0) Returns ----------- type - Tuple : (True or False indicating if conversion was sucessful, meas_file name written)
Use the function convert.cit() to convert the CIT data files from the Swanson-Hysell lab at Berkeley for the PI47 site in the data_files/convert_2_magic/cit_magic/PI47 directory. The site (PI47) was part of a data set published in Fairchild et al. (2016) (available in the MagIC database: https://earthref.org/MagIC/11292/). The location name was “Slate Islands”, the naming convention was #2, the specimen name is designated with 1 terminal character, we don’t wish to average replicate measurements, and the samples were collected by drilling and oriented with a magnetic compass ("FS-FD" and "SO-MAG").
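To make the naming choices concrete: specnum=1 strips one terminal character to go from specimen to sample, and convention '2' (XXXX-YY) splits site from sample at the dash. A small illustration (the helper name is invented):

```python
def specimen_to_sample_site(specimen, specnum=1):
    # specnum terminal characters distinguish specimen from sample
    sample = specimen[:-specnum] if specnum else specimen
    # naming convention '2': site is everything before the dash
    site = sample.split('-')[0]
    return sample, site

print(specimen_to_sample_site('PI47-1a'))
```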
dir_path='data_files/convert_2_magic/cit_magic/PI47/'
convert.cit(dir_path=dir_path,
magfile='PI47-.sam',locname="Slate Islands",specnum=1,samp_con='2',
methods=['FS-FD','SO-MAG'],noave=True)
PI47- Warning: Specimen volume set to 1.0. Warning: If volume/mass really is 1.0, set volume/mass to 1.001 Warning: specimen method code LP-NOMAG set. Warning: Specimen volume set to 1.0. Warning: If volume/mass really is 1.0, set volume/mass to 1.001 Warning: specimen method code LP-NOMAG set. Warning: Specimen volume set to 1.0. Warning: If volume/mass really is 1.0, set volume/mass to 1.001 Warning: specimen method code LP-NOMAG set. Warning: Specimen volume set to 1.0. Warning: If volume/mass really is 1.0, set volume/mass to 1.001 Warning: specimen method code LP-NOMAG set. Warning: Specimen volume set to 1.0. Warning: If volume/mass really is 1.0, set volume/mass to 1.001 Warning: specimen method code LP-NOMAG set. Warning: Specimen volume set to 1.0. Warning: If volume/mass really is 1.0, set volume/mass to 1.001 Warning: specimen method code LP-NOMAG set. Warning: Specimen volume set to 1.0. Warning: If volume/mass really is 1.0, set volume/mass to 1.001 Warning: specimen method code LP-NOMAG set. Warning: Specimen volume set to 1.0. Warning: If volume/mass really is 1.0, set volume/mass to 1.001 Warning: specimen method code LP-NOMAG set. Warning: Specimen volume set to 1.0. Warning: If volume/mass really is 1.0, set volume/mass to 1.001 Warning: specimen method code LP-NOMAG set. adding measurement column to measurements table! 
-I- overwriting /Users/nebula/Python/PmagPy/measurements.txt -I- 266 records written to measurements file -I- overwriting /Users/nebula/Python/PmagPy/data_files/convert_2_magic/cit_magic/PI47/specimens.txt -I- 9 records written to specimens file -I- overwriting /Users/nebula/Python/PmagPy/data_files/convert_2_magic/cit_magic/PI47/samples.txt -I- 9 records written to samples file -I- overwriting /Users/nebula/Python/PmagPy/data_files/convert_2_magic/cit_magic/PI47/sites.txt -I- 1 records written to sites file -I- overwriting /Users/nebula/Python/PmagPy/data_files/convert_2_magic/cit_magic/PI47/locations.txt -I- 1 records written to locations file -I- overwriting /Users/nebula/Python/PmagPy/data_files/convert_2_magic/cit_magic/PI47/measurements.txt -I- 266 records written to measurements file
(True, 'measurements.txt')
We can make some Zijderveld diagrams (see zeq_magic).
ipmag.zeq_magic(input_dir_path=dir_path, save_plots=False)
(True, [])
Use the function convert.cit() to convert the CIT data files from the USGS lab at Menlo Park. The data file is in the data_files/convert_2_magic/cit_magic/USGS/bl9-1 directory, the file name is bl9-1.sam, and the analyst was Hagstrum. The location name was “Boring volcanic field”, and the site name was set by Hagstrum to BL9001 because the site name cannot be determined from the sample name with the currently available options. The samples were collected by drilling and oriented with magnetic and sun compasses ("FS-FD" and "SO-SM"), the measurements are in oersted instead of the standard millitesla, and we don’t wish to average replicate measurements.
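The oersted=True flag tells the converter that the AF demag steps are in cgs field units; in SI, 1 Oe corresponds to 0.1 mT, i.e. 1e-4 T. A tiny illustrative converter (not the PmagPy internals):

```python
# 1 Oe corresponds to 0.1 mT = 1e-4 T (free-space equivalence)
def oe_to_tesla(oe):
    return oe * 1e-4

# typical AF demagnetization steps recorded in oersted
steps_oe = [25, 50, 100, 200]
print([oe_to_tesla(s) for s in steps_oe])
```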
dir_path='data_files/convert_2_magic/cit_magic/USGS/bl9-1'
convert.cit(dir_path=dir_path,
magfile='bl9-1.sam',user='Hagstrum',locname="Boring volcanic field",
sitename='BL9001',methods=['FS-FD','SO-SM','LT-AF-Z'], oersted=True,
noave=True)
Boring Lava collection 2009 Warning: Specimen volume set to 1.0. Warning: If volume/mass really is 1.0, set volume/mass to 1.001 Warning: specimen method code LP-NOMAG set. Warning: Specimen volume set to 1.0. Warning: If volume/mass really is 1.0, set volume/mass to 1.001 Warning: specimen method code LP-NOMAG set. Warning: Specimen volume set to 1.0. Warning: If volume/mass really is 1.0, set volume/mass to 1.001 Warning: specimen method code LP-NOMAG set. Warning: Specimen volume set to 1.0. Warning: If volume/mass really is 1.0, set volume/mass to 1.001 Warning: specimen method code LP-NOMAG set. Warning: Specimen volume set to 1.0. Warning: If volume/mass really is 1.0, set volume/mass to 1.001 Warning: specimen method code LP-NOMAG set. Warning: Specimen volume set to 1.0. Warning: If volume/mass really is 1.0, set volume/mass to 1.001 Warning: specimen method code LP-NOMAG set. Warning: Specimen volume set to 1.0. Warning: If volume/mass really is 1.0, set volume/mass to 1.001 Warning: specimen method code LP-NOMAG set. Warning: Specimen volume set to 1.0. Warning: If volume/mass really is 1.0, set volume/mass to 1.001 Warning: specimen method code LP-NOMAG set. Warning: Specimen volume set to 1.0. Warning: If volume/mass really is 1.0, set volume/mass to 1.001 Warning: specimen method code LP-NOMAG set. adding measurement column to measurements table! 
-I- overwriting /Users/nebula/Python/PmagPy/measurements.txt -I- 63 records written to measurements file -I- writing specimens records to /Users/nebula/Python/PmagPy/data_files/convert_2_magic/cit_magic/USGS/bl9-1/specimens.txt -I- 9 records written to specimens file -I- writing samples records to /Users/nebula/Python/PmagPy/data_files/convert_2_magic/cit_magic/USGS/bl9-1/samples.txt -I- 9 records written to samples file -I- writing sites records to /Users/nebula/Python/PmagPy/data_files/convert_2_magic/cit_magic/USGS/bl9-1/sites.txt -I- 1 records written to sites file -I- writing locations records to /Users/nebula/Python/PmagPy/data_files/convert_2_magic/cit_magic/USGS/bl9-1/locations.txt -I- 1 records written to locations file -I- writing measurements records to /Users/nebula/Python/PmagPy/data_files/convert_2_magic/cit_magic/USGS/bl9-1/measurements.txt -I- 63 records written to measurements file
(True, 'measurements.txt')
We can look at the Zijderveld, etc., diagrams with zeq_magic.
ipmag.zeq_magic(input_dir_path=dir_path, save_plots=False)
(True, [])
Use the function convert.cit() to convert the CIT data files from Ben Weiss's lab at MIT. These data were part of a set published in EPSL: "A nonmagnetic differentiated early planetary body", doi:10.1016/j.epsl.2017.03.026. The data can be found in MagIC at https://earthref.org/MagIC/11943
The data file is in the data_files/convert_2_magic/cit_magic/MIT/7325B directory, the file name is 7325B.sam, and the analyst was Wiess. The location name was “NWA 7325” with the site name coming from the sample name with the "1" convention. The samples are described with the method codes DE-VM, LP-DIR-T, LT-AF-Z, LT-NO, LT-T-Z, and SO-CMD-NORTH (see https://www2.earthref.org/MagIC/method-codes for full descriptions). We also don’t wish to average replicate measurements.
convert.cit(dir_path='data_files/convert_2_magic/cit_magic/MIT/7325B',
magfile='7325B.sam',user='Wiess',locname="NWA 7325",samp_con='1',
methods=['DE-VM', 'LP-DIR-T', 'LT-AF-Z', 'LT-NO', 'LT-T-Z', 'SO-CMD-NORTH'],
noave=True)
NWA 7325 sample B7 Warning: Specimen volume set to 1.0. Warning: If volume/mass really is 1.0, set volume/mass to 1.001 Warning: specimen method code LP-NOMAG set. Warning: Specimen volume set to 1.0. Warning: If volume/mass really is 1.0, set volume/mass to 1.001 Warning: specimen method code LP-NOMAG set. Warning: Specimen volume set to 1.0. Warning: If volume/mass really is 1.0, set volume/mass to 1.001 Warning: specimen method code LP-NOMAG set. Warning: Specimen volume set to 1.0. Warning: If volume/mass really is 1.0, set volume/mass to 1.001 Warning: specimen method code LP-NOMAG set. Warning: Specimen volume set to 1.0. Warning: If volume/mass really is 1.0, set volume/mass to 1.001 Warning: specimen method code LP-NOMAG set. Warning: Specimen volume set to 1.0. Warning: If volume/mass really is 1.0, set volume/mass to 1.001 Warning: specimen method code LP-NOMAG set. Warning: Specimen volume set to 1.0. Warning: If volume/mass really is 1.0, set volume/mass to 1.001 Warning: specimen method code LP-NOMAG set. Warning: Specimen volume set to 1.0. Warning: If volume/mass really is 1.0, set volume/mass to 1.001 Warning: specimen method code LP-NOMAG set. Warning: Specimen volume set to 1.0. Warning: If volume/mass really is 1.0, set volume/mass to 1.001 Warning: specimen method code LP-NOMAG set. adding measurement column to measurements table! 
-I- overwriting /Users/nebula/Python/PmagPy/measurements.txt -I- 309 records written to measurements file -I- writing specimens records to /Users/nebula/Python/PmagPy/data_files/convert_2_magic/cit_magic/MIT/7325B/specimens.txt -I- 9 records written to specimens file -I- writing samples records to /Users/nebula/Python/PmagPy/data_files/convert_2_magic/cit_magic/MIT/7325B/samples.txt -I- 9 records written to samples file -I- writing sites records to /Users/nebula/Python/PmagPy/data_files/convert_2_magic/cit_magic/MIT/7325B/sites.txt -I- 1 records written to sites file -I- writing locations records to /Users/nebula/Python/PmagPy/data_files/convert_2_magic/cit_magic/MIT/7325B/locations.txt -I- 1 records written to locations file -I- writing measurements records to /Users/nebula/Python/PmagPy/data_files/convert_2_magic/cit_magic/MIT/7325B/measurements.txt -I- 309 records written to measurements file
(True, 'measurements.txt')
And take a look see:
ipmag.zeq_magic(input_dir_path='data_files/convert_2_magic/cit_magic/MIT/7325B', save_plots=False)
(True, [])
[MagIC Database] [command line version]
If you have a data file format that is not supported, you can relabel column headers to fit the generic format as in the generic_magic example data file.
To import the generic file format, use convert.generic().
help(convert.generic)
Help on function generic in module pmagpy.convert_2_magic:

generic(magfile='', dir_path='.', meas_file='measurements.txt', spec_file='specimens.txt', samp_file='samples.txt', site_file='sites.txt', loc_file='locations.txt', user='', labfield=0, labfield_phi=0, labfield_theta=0, experiment='', cooling_times_list=[], sample_nc=[1, 0], site_nc=[1, 0], location='unknown', lat='', lon='', noave=False, input_dir_path='')
    Convert generic file to MagIC file(s)

    Parameters
    ----------
    magfile : str
        input file name
    dir_path : str
        output directory, default "."
    meas_file : str
        output measurement file name, default "measurements.txt"
    spec_file : str
        output specimen file name, default "specimens.txt"
    samp_file : str
        output sample file name, default "samples.txt"
    site_file : str
        output site file name, default "sites.txt"
    loc_file : str
        output location file name, default "locations.txt"
    user : str
        user name, default ""
    labfield : float
        dc lab field (in microtesla)
    labfield_phi : float
        declination 0-360
    labfield_theta : float
        inclination -90 - 90
    experiment : str
        experiment type, see info below
    cooling_times_list : list
        cooling times in [K/minutes], separated by commas, in the same order as XXX.10, XXX.20 ... XXX.70
    sample_nc : list
        sample naming convention, default [1, 0], see info below
    site_nc : list
        site naming convention, default [1, 0], see info below
    location : str
        location name, default "unknown"
    lat : float
        latitude, default ""
    lon : float
        longitude, default ""
    noave : bool
        do not average duplicate measurements, default False (so by default, DO average)
    input_dir_path : str
        input file directory IF different from dir_path, default ""

    Info
    ----
    Experiment type:
        Demag: AF and/or thermal
        PI: paleointensity thermal experiment (ZI/IZ/IZZI)
        ATRM n: ATRM in n positions (n=6)
        AARM n: AARM in n positions
        CR: cooling rate experiment
            The treatment coding of the measurement file should be XXX.00, XXX.10, XXX.20 ... XXX.70, etc.
            (XXX.00 is optional), where XXX is the temperature and .10, .20 ... are running numbers of the
            cooling rate steps. XXX.00 is the optional zerofield baseline; XXX.70 is the alteration check.
            If using this type, you must also provide cooling rates in [K/minutes] in cooling_times_list,
            separated by commas, in the same order as XXX.10, XXX.20 ... XXX.70.
            There is no need to specify the cooling rate for the zerofield step,
            but make sure that there are no duplicate measurements in the file.
        NLT: non-linear-TRM experiment

    Specimen-sample naming convention:
        X determines the kind of convention (initial characters, terminal characters, or delimiter);
        Y determines how many characters to remove to go from specimen --> sample, OR which delimiter to use.
        X=0 Y=n: specimen is distinguished from sample by n initial characters
            (example: generic(sample_nc=[0, 4], ...); if n=4 and specimen = mgf13a, then sample = mgf13)
        X=1 Y=n: specimen is distinguished from sample by n terminal characters
            (example: generic(sample_nc=[1, 1], ...); if n=1 and specimen = mgf13a, then sample = mgf13)
        X=2 Y=c: specimen is distinguished from sample by the delimiter c
            (example: generic(sample_nc=[2, "-"], ...); if c='-' and specimen = mgf13-a, then sample = mgf13)
        default: sample name is the same as specimen name

    Sample-site naming convention:
        X determines the kind of convention (initial characters, terminal characters, or delimiter);
        Y determines how many characters to remove to go from sample --> site, OR which delimiter to use.
        X=0 Y=n: sample is distinguished from site by n initial characters
            (example: generic(site_nc=[0, 3]); if n=3 and sample = mgf13, then site = mgf)
        X=1 Y=n: sample is distinguished from site by n terminal characters
            (example: generic(site_nc=[1, 2]); if n=2 and sample = mgf13, then site = mgf)
        X=2 Y=c: sample is distinguished from site by the delimiter c
            (example: generic(site_nc=[2, "-"]); if c='-' and sample = 'mgf-13', then site = mgf)
        default: site name is the same as sample name
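The [X, Y] naming conventions above can be sketched in a few lines of plain Python. This is only an illustration of the rules as described, not PmagPy's own implementation:

```python
def specimen_to_sample(specimen, nc):
    """Strip a specimen name down to its parent name using an [X, Y]
    naming convention: X selects the rule, Y is a character count or a
    delimiter. The same logic applies to the sample --> site step."""
    X, Y = nc
    if X == 0:              # drop Y initial characters
        return specimen[Y:]
    if X == 1:              # drop Y terminal characters
        return specimen[:-Y]
    if X == 2:              # split on the delimiter Y, keep the leading part
        return specimen.split(Y)[0]
    return specimen         # default: parent name == child name
```

For example, `specimen_to_sample('mgf13a', [1, 1])` gives `'mgf13'` and `specimen_to_sample('mgf13-a', [2, '-'])` also gives `'mgf13'`.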
convert.generic(magfile='data_files/convert_2_magic/generic_magic/generic_magic_example.txt',
                experiment='PI', dir_path='data_files/convert_2_magic/generic_magic')
adding measurement column to measurements table!
-I- overwriting /Users/nebula/Python/PmagPy/measurements.txt
-I- 23 records written to measurements file
-I- writing specimens records to /Users/nebula/Python/PmagPy/data_files/convert_2_magic/generic_magic/specimens.txt
-I- 2 records written to specimens file
-I- writing samples records to /Users/nebula/Python/PmagPy/data_files/convert_2_magic/generic_magic/samples.txt
-I- 2 records written to samples file
-I- writing sites records to /Users/nebula/Python/PmagPy/data_files/convert_2_magic/generic_magic/sites.txt
-I- 2 records written to sites file
-I- writing locations records to /Users/nebula/Python/PmagPy/data_files/convert_2_magic/generic_magic/locations.txt
-I- 1 records written to locations file
-I- writing measurements records to /Users/nebula/Python/PmagPy/data_files/convert_2_magic/generic_magic/measurements.txt
-I- 23 records written to measurements file
(True, 'measurements.txt')
# let's take a look
dir_path='data_files/convert_2_magic/generic_magic/'
ipmag.zeq_magic(input_dir_path=dir_path, save_plots=False)
(True, [])
[MagIC Database] [command line version]
To import the Hebrew University, Jerusalem, Israel file format to MagIC, use convert.huji().
help(convert.huji)
Help on function huji in module pmagpy.convert_2_magic:

huji(magfile='', dir_path='.', input_dir_path='', datafile='', codelist='', meas_file='measurements.txt', spec_file='specimens.txt', samp_file='samples.txt', site_file='sites.txt', loc_file='locations.txt', user='', specnum=0, samp_con='1', labfield=0, phi=0, theta=0, location='', CR_cooling_times=None, noave=False)
    Convert HUJI format file to MagIC file(s)

    Parameters
    ----------
    magfile : str
        input file name
    dir_path : str
        working directory, default "."
    input_dir_path : str
        input file directory IF different from dir_path, default ""
    datafile : str
        HUJI datafile with sample orientations, default ""
    codelist : str
        colon-delimited protocols; include all that apply (see info below)
    meas_file : str
        output measurement file name, default "measurements.txt"
    spec_file : str
        output specimen file name, default "specimens.txt"
    samp_file : str
        output sample file name, default "samples.txt"
    site_file : str
        output site file name, default "sites.txt"
    loc_file : str
        output location file name, default "locations.txt"
    user : str
        user name, default ""
    specnum : int
        number of characters to designate a specimen, default 0
    samp_con : str
        sample/site naming convention, default '1', see info below
    labfield : float
        dc lab field (in microtesla)
    labfield_phi : float
        declination 0-360
    labfield_theta : float
        inclination -90 - 90
    location : str
        location name, default "unknown"
    CR_cooling_times : list
        cooling times in [K/minutes], separated by commas, in the same order as XXX.10, XXX.20 ... XXX.70; default None
    noave : bool
        do not average duplicate measurements, default False (so by default, DO average)

    Info
    ----
    Code list:
        AF: AF demag
        T: thermal, including Thellier but not TRM acquisition
        N: NRM only
        TRM: TRM acquisition
        ANI: anisotropy experiment
        CR: cooling rate experiment
            The treatment coding of the measurement file should be XXX.00, XXX.10, XXX.20 ... XXX.70, etc.
            (XXX.00 is optional), where XXX is the temperature and .10, .20 ... are running numbers of the
            cooling rate steps. XXX.00 is the optional zerofield baseline; XXX.70 is the alteration check.
            The syntax in sio_magic is: -LP CR xxx,yyy,zzz,...xxx where xxx, yyy, zzz ... are cooling times
            in [K/minutes], separated by commas, in the same order as XXX.10, XXX.20 ... XXX.70.
            If you use a zerofield step, there is no need to specify its cooling rate.

    Sample naming convention:
        [1] XXXXY: where XXXX is an arbitrary-length site designation and Y is the single-character
            sample designation, e.g., TG001a is the first sample from site TG001 [default]
        [2] XXXX-YY: YY sample from site XXXX (XXXX, YY of arbitrary length)
        [3] XXXX.YY: YY sample from site XXXX (XXXX, YY of arbitrary length)
        [4-Z] XXXX[YYY]: YYY is sample designation with Z characters from site XXXX
        [5] site name = sample name
        [6] site name entered in site_name column in the orient.txt format input file -- NOT CURRENTLY SUPPORTED
        [7-Z] [XXX]YYY: XXX is site designation with Z characters from samples XXXYYY
dir_path='data_files/convert_2_magic/huji_magic/'
convert.huji(dir_path=dir_path,
             magfile='Massada_AF_HUJI_new_format.txt', codelist='T')
-W- Identical treatments in file Massada_AF_HUJI_new_format.txt magfile line 818: specimen M5-119E, treatment 0 ignoring the first.
-I- done reading file Massada_AF_HUJI_new_format.txt
adding measurement column to measurements table!
-I- overwriting /Users/nebula/Python/PmagPy/measurements.txt
-I- 616 records written to measurements file
-I- writing specimens records to /Users/nebula/Python/PmagPy/data_files/convert_2_magic/huji_magic/specimens.txt
-I- 56 records written to specimens file
-I- writing samples records to /Users/nebula/Python/PmagPy/data_files/convert_2_magic/huji_magic/samples.txt
-I- 56 records written to samples file
-I- writing sites records to /Users/nebula/Python/PmagPy/data_files/convert_2_magic/huji_magic/sites.txt
-I- 29 records written to sites file
-I- writing locations records to /Users/nebula/Python/PmagPy/data_files/convert_2_magic/huji_magic/locations.txt
-I- 1 records written to locations file
-I- writing measurements records to /Users/nebula/Python/PmagPy/data_files/convert_2_magic/huji_magic/measurements.txt
-I- 616 records written to measurements file
(True, 'measurements.txt')
ipmag.zeq_magic(input_dir_path=dir_path, save_plots=False, n_plots=10)
(True, [])
[MagIC Database] [command line version]
To convert the Hebrew University, Jerusalem, Israel sample format to MagIC, use convert.huji_sample().
help(convert.huji_sample)
Help on function huji_sample in module pmagpy.convert_2_magic:

huji_sample(orient_file, meths='FS-FD:SO-POM:SO-SUN', location_name='unknown', samp_con='1', ignore_dip=True, data_model_num=3, samp_file='samples.txt', site_file='sites.txt', dir_path='.', input_dir_path='')
    Convert HUJI sample file to MagIC file(s)

    Parameters
    ----------
    orient_file : str
        input file name
    meths : str
        colon-delimited sampling methods, default FS-FD:SO-POM:SO-SUN; for more options, see info below
    location_name : str
        location name, default "unknown"
    samp_con : str
        sample/site naming convention, default '1', see info below
    ignore_dip : bool
        set sample az/dip to 0, default True
    data_model_num : int
        MagIC data model 2 or 3, default 3
    samp_file : str
        sample file name to output (default: samples.txt)
    site_file : str
        site file name to output (default: sites.txt)
    dir_path : str
        output directory, default "."
    input_dir_path : str
        input file directory IF different from dir_path, default ""

    Returns
    -------
    type - Tuple : (True or False indicating if conversion was successful, file name written)

    Info
    ----
    Sampling method codes:
        FS-FD: field sampling done with a drill
        FS-H: field sampling done with hand samples
        FS-LOC-GPS: field location done with GPS
        FS-LOC-MAP: field location done with map
        SO-POM: a Pomeroy orientation device was used
        SO-ASC: an ASC orientation device was used
        SO-MAG: orientation with magnetic compass

    Sample naming convention:
        [1] XXXXY: where XXXX is an arbitrary-length site designation and Y is the single-character
            sample designation, e.g., TG001a is the first sample from site TG001 [default]
        [2] XXXX-YY: YY sample from site XXXX (XXXX, YY of arbitrary length)
        [3] XXXX.YY: YY sample from site XXXX (XXXX, YY of arbitrary length)
        [4-Z] XXXX[YYY]: YYY is sample designation with Z characters from site XXXX
        [5] site name = sample name
        [6] site name entered in site_name column in the orient.txt format input file -- NOT CURRENTLY SUPPORTED
        [7-Z] [XXX]YYY: XXX is site designation with Z characters from samples XXXYYY
convert.huji_sample('magdelkrum_datafile.txt',
                    dir_path='data_files/convert_2_magic/huji_magic/')
-I- reading in: /Users/nebula/Python/PmagPy/data_files/convert_2_magic/huji_magic/magdelkrum_datafile.txt
57 records written to file /Users/nebula/Python/PmagPy/data_files/convert_2_magic/huji_magic/samples.txt
57 records written to file /Users/nebula/Python/PmagPy/data_files/convert_2_magic/huji_magic/sites.txt
Sample info saved in /Users/nebula/Python/PmagPy/data_files/convert_2_magic/huji_magic/samples.txt
Site info saved in /Users/nebula/Python/PmagPy/data_files/convert_2_magic/huji_magic/sites.txt
(True, '/Users/nebula/Python/PmagPy/data_files/convert_2_magic/huji_magic/samples.txt')
help(ipmag.combine_magic)
Help on function combine_magic in module pmagpy.ipmag:

combine_magic(filenames, outfile, data_model=3, magic_table='measurements', output_dir_path='.', input_dir_path='')
    Takes a list of MagIC-formatted files, concatenates them, and creates a single file.
    Returns the output file name if the operation was successful.

    Parameters
    ----------
    filenames : list of MagIC-formatted files
    outfile : name of output file
    data_model : data model number (2.5 or 3), default 3
    magic_table : name of magic table, default 'measurements'

    Returns
    -------
    outfile name if success, False if failure
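Conceptually, combining MagIC tables amounts to concatenating tab-delimited files that each begin with a one-line 'tab &lt;table name&gt;' header followed by a row of column names. A minimal pandas sketch of that idea (an illustration, not ipmag's actual implementation; `concat_magic` is a hypothetical name):

```python
import pandas as pd

def concat_magic(filenames, outfile, table='measurements'):
    """Concatenate MagIC-format files: line 1 is 'tab <table>', line 2
    holds the column names, and the remaining lines are the records."""
    frames = [pd.read_csv(f, sep='\t', header=1) for f in filenames]
    combined = pd.concat(frames, ignore_index=True, sort=False)
    with open(outfile, 'w') as out:
        out.write('tab\t{}\n'.format(table))   # restore the MagIC header line
        combined.to_csv(out, sep='\t', index=False)
    return outfile
```

ipmag.combine_magic does the same job while also handling the data-model bookkeeping, so it is the function to use in practice.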
[MagIC Database] [command line version]
The AGICO JR6 spinner magnetometer has two output formats, .jr6 and .txt. Here we illustrate the conversion of the .jr6 format. There are data from two different studies in the example folder: one (from Anita di Chiara) has the suffix '.JR6' and the other (from Roi Granot) is lower case ('.jr6'). Each file contains the data from a single specimen's experiment, so we can convert Anita's data to a series of MagIC-formatted measurement files, combine them with ipmag.combine_magic, and look at them with Demag GUI (on the command line) or zeq_magic within the notebook.
help(convert.jr6_jr6)
Help on function jr6_jr6 in module pmagpy.convert_2_magic:

jr6_jr6(mag_file, dir_path='.', input_dir_path='', meas_file='measurements.txt', spec_file='specimens.txt', samp_file='samples.txt', site_file='sites.txt', loc_file='locations.txt', specnum=1, samp_con='1', location='unknown', lat='', lon='', noave=False, meth_code='LP-NO', volume=12, JR=False, user='')
    Convert JR6 .jr6 files to MagIC file(s)

    Parameters
    ----------
    mag_file : str
        input file name
    dir_path : str
        working directory, default "."
    input_dir_path : str
        input file directory IF different from dir_path, default ""
    meas_file : str
        output measurement file name, default "measurements.txt"
    spec_file : str
        output specimen file name, default "specimens.txt"
    samp_file : str
        output sample file name, default "samples.txt"
    site_file : str
        output site file name, default "sites.txt"
    loc_file : str
        output location file name, default "locations.txt"
    specnum : int
        number of characters to designate a specimen, default 0
    samp_con : str
        sample/site naming convention, default '1', see info below
    location : str
        location name, default "unknown"
    lat : float
        latitude, default ""
    lon : float
        longitude, default ""
    noave : bool
        do not average duplicate measurements, default False (so by default, DO average)
    meth_code : str
        colon-delimited method codes, default "LP-NO"
    volume : float
        volume in ccs, default 12
    JR : bool
        IODP samples were measured on the JOIDES RESOLUTION, default False
    user : str
        user name, default ""

    Returns
    -------
    Tuple : (True or False indicating if conversion was successful, meas_file name written)

    Info
    ----
    Sample naming convention:
        [1] XXXXY: where XXXX is an arbitrary-length site designation and Y is the single-character
            sample designation, e.g., TG001a is the first sample from site TG001 [default]
        [2] XXXX-YY: YY sample from site XXXX (XXXX, YY of arbitrary length)
        [3] XXXX.YY: YY sample from site XXXX (XXXX, YY of arbitrary length)
        [4-Z] XXXX[YYY]: YYY is sample designation with Z characters from site XXXX
        [5] site name same as sample
        [6] site is entered under a separate column -- NOT CURRENTLY SUPPORTED
        [7-Z] [XXXX]YYY: XXXX is site designation with Z characters with sample name XXXXYYYY
Let's start with Anita's files:
dir_path='data_files/convert_2_magic/jr6_magic/'
files=os.listdir(dir_path)
meas_files,spec_files,samp_files,site_files=[],[],[],[]
for file in files:
    if '.JR6' in file:
        print (file)
        stem=file.split('.')[0]
        meas_file=stem+'_measurements.txt' # make a unique measurements file
        spec_file=stem+'_specimens.txt'
        samp_file=stem+'_samples.txt'
        site_file=stem+'_sites.txt'
        convert.jr6_jr6(file,dir_path=dir_path,
                        meas_file=meas_file,spec_file=spec_file,samp_file=samp_file,
                        site_file=site_file,user='Anita')
        meas_files.append(dir_path+meas_file) # save the file name to a list
        spec_files.append(dir_path+spec_file)
        samp_files.append(dir_path+samp_file)
        site_files.append(dir_path+site_file)
# combine the files
ipmag.combine_magic(meas_files,dir_path+'measurements.txt')
ipmag.combine_magic(spec_files,dir_path+'specimens.txt',magic_table='specimens')
ipmag.combine_magic(samp_files,dir_path+'samples.txt',magic_table='samples')
ipmag.combine_magic(site_files,dir_path+'sites.txt',magic_table='sites')
SML02.JR6
adding measurement column to measurements table!
-I- overwriting /Users/nebula/Python/PmagPy/measurements.txt
-I- 104 records written to measurements file
-I- writing specimens records to /Users/nebula/Python/PmagPy/data_files/convert_2_magic/jr6_magic/SML02_specimens.txt
-I- 15 records written to specimens file
-I- writing samples records to /Users/nebula/Python/PmagPy/data_files/convert_2_magic/jr6_magic/SML02_samples.txt
-I- 2 records written to samples file
-I- writing sites records to /Users/nebula/Python/PmagPy/data_files/convert_2_magic/jr6_magic/SML02_sites.txt
-I- 1 records written to sites file
-I- overwriting /Users/nebula/Python/PmagPy/data_files/convert_2_magic/jr6_magic/locations.txt
-I- 1 records written to locations file
-I- writing measurements records to /Users/nebula/Python/PmagPy/data_files/convert_2_magic/jr6_magic/SML02_measurements.txt
-I- 104 records written to measurements file
SML03.JR6
adding measurement column to measurements table!
-I- overwriting /Users/nebula/Python/PmagPy/measurements.txt
-I- 105 records written to measurements file
-I- writing specimens records to /Users/nebula/Python/PmagPy/data_files/convert_2_magic/jr6_magic/SML03_specimens.txt
-I- 15 records written to specimens file
-I- writing samples records to /Users/nebula/Python/PmagPy/data_files/convert_2_magic/jr6_magic/SML03_samples.txt
-I- 2 records written to samples file
-I- writing sites records to /Users/nebula/Python/PmagPy/data_files/convert_2_magic/jr6_magic/SML03_sites.txt
-I- 1 records written to sites file
-I- overwriting /Users/nebula/Python/PmagPy/data_files/convert_2_magic/jr6_magic/locations.txt
-I- 1 records written to locations file
-I- writing measurements records to /Users/nebula/Python/PmagPy/data_files/convert_2_magic/jr6_magic/SML03_measurements.txt
-I- 105 records written to measurements file
SML01.JR6
adding measurement column to measurements table!
-I- overwriting /Users/nebula/Python/PmagPy/measurements.txt
-I- 70 records written to measurements file
-I- writing specimens records to /Users/nebula/Python/PmagPy/data_files/convert_2_magic/jr6_magic/SML01_specimens.txt
-I- 10 records written to specimens file
-I- writing samples records to /Users/nebula/Python/PmagPy/data_files/convert_2_magic/jr6_magic/SML01_samples.txt
-I- 2 records written to samples file
-I- writing sites records to /Users/nebula/Python/PmagPy/data_files/convert_2_magic/jr6_magic/SML01_sites.txt
-I- 1 records written to sites file
-I- overwriting /Users/nebula/Python/PmagPy/data_files/convert_2_magic/jr6_magic/locations.txt
-I- 1 records written to locations file
-I- writing measurements records to /Users/nebula/Python/PmagPy/data_files/convert_2_magic/jr6_magic/SML01_measurements.txt
-I- 70 records written to measurements file
SML04.JR6
adding measurement column to measurements table!
-I- overwriting /Users/nebula/Python/PmagPy/measurements.txt
-I- 102 records written to measurements file
-I- writing specimens records to /Users/nebula/Python/PmagPy/data_files/convert_2_magic/jr6_magic/SML04_specimens.txt
-I- 15 records written to specimens file
-I- writing samples records to /Users/nebula/Python/PmagPy/data_files/convert_2_magic/jr6_magic/SML04_samples.txt
-I- 2 records written to samples file
-I- writing sites records to /Users/nebula/Python/PmagPy/data_files/convert_2_magic/jr6_magic/SML04_sites.txt
-I- 1 records written to sites file
-I- overwriting /Users/nebula/Python/PmagPy/data_files/convert_2_magic/jr6_magic/locations.txt
-I- 1 records written to locations file
-I- writing measurements records to /Users/nebula/Python/PmagPy/data_files/convert_2_magic/jr6_magic/SML04_measurements.txt
-I- 102 records written to measurements file
SML05.JR6
adding measurement column to measurements table!
-I- overwriting /Users/nebula/Python/PmagPy/measurements.txt
-I- 102 records written to measurements file
-I- writing specimens records to /Users/nebula/Python/PmagPy/data_files/convert_2_magic/jr6_magic/SML05_specimens.txt
-I- 15 records written to specimens file
-I- writing samples records to /Users/nebula/Python/PmagPy/data_files/convert_2_magic/jr6_magic/SML05_samples.txt
-I- 2 records written to samples file
-I- writing sites records to /Users/nebula/Python/PmagPy/data_files/convert_2_magic/jr6_magic/SML05_sites.txt
-I- 1 records written to sites file
-I- overwriting /Users/nebula/Python/PmagPy/data_files/convert_2_magic/jr6_magic/locations.txt
-I- 1 records written to locations file
-I- writing measurements records to /Users/nebula/Python/PmagPy/data_files/convert_2_magic/jr6_magic/SML05_measurements.txt
-I- 102 records written to measurements file
SML07.JR6
adding measurement column to measurements table!
-I- overwriting /Users/nebula/Python/PmagPy/measurements.txt
-I- 111 records written to measurements file
-I- writing specimens records to /Users/nebula/Python/PmagPy/data_files/convert_2_magic/jr6_magic/SML07_specimens.txt
-I- 16 records written to specimens file
-I- writing samples records to /Users/nebula/Python/PmagPy/data_files/convert_2_magic/jr6_magic/SML07_samples.txt
-I- 2 records written to samples file
-I- writing sites records to /Users/nebula/Python/PmagPy/data_files/convert_2_magic/jr6_magic/SML07_sites.txt
-I- 1 records written to sites file
-I- overwriting /Users/nebula/Python/PmagPy/data_files/convert_2_magic/jr6_magic/locations.txt
-I- 1 records written to locations file
-I- writing measurements records to /Users/nebula/Python/PmagPy/data_files/convert_2_magic/jr6_magic/SML07_measurements.txt
-I- 111 records written to measurements file
SML06.JR6
adding measurement column to measurements table!
-I- overwriting /Users/nebula/Python/PmagPy/measurements.txt
-I- 98 records written to measurements file
-I- writing specimens records to /Users/nebula/Python/PmagPy/data_files/convert_2_magic/jr6_magic/SML06_specimens.txt
-I- 14 records written to specimens file
-I- writing samples records to /Users/nebula/Python/PmagPy/data_files/convert_2_magic/jr6_magic/SML06_samples.txt
-I- 2 records written to samples file
-I- writing sites records to /Users/nebula/Python/PmagPy/data_files/convert_2_magic/jr6_magic/SML06_sites.txt
-I- 1 records written to sites file
-I- overwriting /Users/nebula/Python/PmagPy/data_files/convert_2_magic/jr6_magic/locations.txt
-I- 1 records written to locations file
-I- writing measurements records to /Users/nebula/Python/PmagPy/data_files/convert_2_magic/jr6_magic/SML06_measurements.txt
-I- 98 records written to measurements file
-I- overwriting /Users/nebula/Python/PmagPy/data_files/convert_2_magic/jr6_magic/measurements.txt
-I- 692 records written to measurements file
-I- overwriting /Users/nebula/Python/PmagPy/data_files/convert_2_magic/jr6_magic/specimens.txt
-I- 100 records written to specimens file
-I- overwriting /Users/nebula/Python/PmagPy/data_files/convert_2_magic/jr6_magic/samples.txt
-I- 14 records written to samples file
-I- overwriting /Users/nebula/Python/PmagPy/data_files/convert_2_magic/jr6_magic/sites.txt
-I- 7 records written to sites file
'/Users/nebula/Python/PmagPy/data_files/convert_2_magic/jr6_magic/sites.txt'
ipmag.zeq_magic(input_dir_path=dir_path, save_plots=False)
(True, [])
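A quick sanity check on a combined file is to read it back with pandas, skipping the one-line MagIC header, and count records per specimen. The sketch below assumes the data-model-3 'specimen' column that the converters write; `counts_per_specimen` is a hypothetical helper, not a PmagPy function:

```python
import pandas as pd

def counts_per_specimen(meas_file):
    """Count measurement records per specimen in a MagIC measurements
    file (line 1 is the 'tab measurements' header; line 2 holds the
    column names)."""
    meas = pd.read_csv(meas_file, sep='\t', header=1)
    return meas.groupby('specimen').size()

# e.g. counts_per_specimen(dir_path + 'measurements.txt')
```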
Now we can do Roi's files:
dir_path='data_files/convert_2_magic/jr6_magic/'
files=os.listdir(dir_path)
meas_files,spec_files,samp_files,site_files=[],[],[],[]
for file in files:
    if file.endswith('.jr6'):
        stem=file.split('.')[0]
        meas_file=stem+'_measurements.txt' # make a unique measurements file
        spec_file=stem+'_specimens.txt'
        samp_file=stem+'_samples.txt'
        site_file=stem+'_sites.txt'
        convert.jr6_jr6(file,dir_path=dir_path,
                        meas_file=meas_file,spec_file=spec_file,samp_file=samp_file,
                        site_file=site_file,user='Roi')
        meas_files.append(dir_path+meas_file) # save the file name to a list
        spec_files.append(dir_path+spec_file)
        samp_files.append(dir_path+samp_file)
        site_files.append(dir_path+site_file)
# combine the files
ipmag.combine_magic(meas_files,dir_path+'measurements.txt')
ipmag.combine_magic(spec_files,dir_path+'specimens.txt',magic_table='specimens')
ipmag.combine_magic(samp_files,dir_path+'samples.txt',magic_table='samples')
ipmag.combine_magic(site_files,dir_path+'sites.txt',magic_table='sites')
adding measurement column to measurements table!
-I- overwriting /Users/nebula/Python/PmagPy/measurements.txt
-I- 499 records written to measurements file
-I- overwriting /Users/nebula/Python/PmagPy/data_files/convert_2_magic/jr6_magic/TRM_specimens.txt
-I- 42 records written to specimens file
-I- overwriting /Users/nebula/Python/PmagPy/data_files/convert_2_magic/jr6_magic/TRM_samples.txt
-I- 19 records written to samples file
-I- overwriting /Users/nebula/Python/PmagPy/data_files/convert_2_magic/jr6_magic/TRM_sites.txt
-I- 10 records written to sites file
-I- overwriting /Users/nebula/Python/PmagPy/data_files/convert_2_magic/jr6_magic/locations.txt
-I- 1 records written to locations file
-I- writing measurements records to /Users/nebula/Python/PmagPy/data_files/convert_2_magic/jr6_magic/TRM_measurements.txt
-I- 499 records written to measurements file
measurement type unknown -01A
adding measurement column to measurements table!
-I- overwriting /Users/nebula/Python/PmagPy/measurements.txt
-I- 655 records written to measurements file
-I- overwriting /Users/nebula/Python/PmagPy/data_files/convert_2_magic/jr6_magic/AF_specimens.txt
-I- 57 records written to specimens file
-I- overwriting /Users/nebula/Python/PmagPy/data_files/convert_2_magic/jr6_magic/AF_samples.txt
-I- 17 records written to samples file
-I- overwriting /Users/nebula/Python/PmagPy/data_files/convert_2_magic/jr6_magic/AF_sites.txt
-I- 10 records written to sites file
-I- overwriting /Users/nebula/Python/PmagPy/data_files/convert_2_magic/jr6_magic/locations.txt
-I- 1 records written to locations file
-I- writing measurements records to /Users/nebula/Python/PmagPy/data_files/convert_2_magic/jr6_magic/AF_measurements.txt
-I- 655 records written to measurements file
-I- overwriting /Users/nebula/Python/PmagPy/data_files/convert_2_magic/jr6_magic/measurements.txt
-I- 1154 records written to measurements file
-I- overwriting /Users/nebula/Python/PmagPy/data_files/convert_2_magic/jr6_magic/specimens.txt
-I- 98 records written to specimens file
-I- overwriting /Users/nebula/Python/PmagPy/data_files/convert_2_magic/jr6_magic/samples.txt
-I- 30 records written to samples file
-I- overwriting /Users/nebula/Python/PmagPy/data_files/convert_2_magic/jr6_magic/sites.txt
-I- 10 records written to sites file
'/Users/nebula/Python/PmagPy/data_files/convert_2_magic/jr6_magic/sites.txt'
ipmag.zeq_magic(input_dir_path=dir_path, save_plots=False)
(True, [])
[MagIC Database] [command line version]
We can repeat the exercise for the JR6 .txt format using convert.jr6_txt().
help(convert.jr6_txt)
Help on function jr6_txt in module pmagpy.convert_2_magic:

jr6_txt(mag_file, dir_path='.', input_dir_path='', meas_file='measurements.txt', spec_file='specimens.txt', samp_file='samples.txt', site_file='sites.txt', loc_file='locations.txt', user='', specnum=1, samp_con='1', location='unknown', lat='', lon='', noave=False, volume=12, timezone='UTC', meth_code='LP-NO')
    Converts JR6 .txt format files to MagIC measurements format files.

    Parameters
    ----------
    mag_file : str
        input file name
    dir_path : str
        working directory, default "."
    input_dir_path : str
        input file directory IF different from dir_path, default ""
    meas_file : str
        output measurement file name, default "measurements.txt"
    spec_file : str
        output specimen file name, default "specimens.txt"
    samp_file : str
        output sample file name, default "samples.txt"
    site_file : str
        output site file name, default "sites.txt"
    loc_file : str
        output location file name, default "locations.txt"
    user : str
        user name, default ""
    specnum : int
        number of characters to designate a specimen, default 0
    samp_con : str
        sample/site naming convention, default '1', see info below
    location : str
        location name, default "unknown"
    lat : float
        latitude, default ""
    lon : float
        longitude, default ""
    noave : bool
        do not average duplicate measurements, default False (so by default, DO average)
    volume : float
        volume in ccs, default 12
    timezone : timezone of date/time string in comment string, default UTC
    meth_code : str
        default "LP-NO"

    Returns
    -------
    Tuple : (True or False indicating if conversion was successful, meas_file name written)
There are only data from Roi Granot in this format. The measurement values should be identical to those produced by the convert.jr6_jr6() function on .jr6 files with the same stem. Additional columns are found when converting the .JR6 format, as that format contains more information than the .txt files.
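One way to check that claim is to compare a shared numeric column of two converted measurement files with numpy. A hedged sketch: `magn_moment` is the standard data-model-3 moment column and is assumed to be present, row-aligned, in both files, and `moments_match` is a hypothetical helper, not a PmagPy function:

```python
import numpy as np
import pandas as pd

def moments_match(file_a, file_b, col='magn_moment'):
    """True if two MagIC measurement files agree, row for row and to
    floating-point tolerance, on the given numeric column."""
    a = pd.read_csv(file_a, sep='\t', header=1)   # skip the 'tab measurements' line
    b = pd.read_csv(file_b, sep='\t', header=1)
    return len(a) == len(b) and np.allclose(a[col].astype(float),
                                            b[col].astype(float))
```

Note that the loops in this notebook write per-study files with the same stems, so such a comparison would need to be run before the second conversion overwrites them.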
dir_path='data_files/convert_2_magic/jr6_magic/'
files=['AF.txt','TRM.txt','AP12.txt']
meas_files,spec_files,samp_files,site_files=[],[],[],[]
for file in files:
    print (file)
    stem=file.split('.')[0]
    meas_file=stem+'_measurements.txt' # make a unique measurements file
    spec_file=stem+'_specimens.txt'
    samp_file=stem+'_samples.txt'
    site_file=stem+'_sites.txt'
    convert.jr6_txt(file,dir_path=dir_path,
                    meas_file=meas_file,spec_file=spec_file,samp_file=samp_file,
                    site_file=site_file,user='Roi')
    meas_files.append(dir_path+meas_file) # save the file name to a list
    spec_files.append(dir_path+spec_file)
    samp_files.append(dir_path+samp_file)
    site_files.append(dir_path+site_file)
# combine the files
ipmag.combine_magic(meas_files,dir_path+'measurements.txt')
ipmag.combine_magic(spec_files,dir_path+'specimens.txt',magic_table='specimens')
ipmag.combine_magic(samp_files,dir_path+'samples.txt',magic_table='samples')
ipmag.combine_magic(site_files,dir_path+'sites.txt',magic_table='sites')
AF.txt
-I- Using less strict decoding for /Users/nebula/Python/PmagPy/data_files/convert_2_magic/jr6_magic/AF.txt, output may have formatting errors
adding measurement column to measurements table!
-I- overwriting /Users/nebula/Python/PmagPy/measurements.txt
-I- 655 records written to measurements file
-I- overwriting /Users/nebula/Python/PmagPy/data_files/convert_2_magic/jr6_magic/AF_specimens.txt
-I- 57 records written to specimens file
-I- overwriting /Users/nebula/Python/PmagPy/data_files/convert_2_magic/jr6_magic/AF_samples.txt
-I- 17 records written to samples file
-I- overwriting /Users/nebula/Python/PmagPy/data_files/convert_2_magic/jr6_magic/AF_sites.txt
-I- 10 records written to sites file
-I- overwriting /Users/nebula/Python/PmagPy/data_files/convert_2_magic/jr6_magic/locations.txt
-I- 1 records written to locations file
-I- overwriting /Users/nebula/Python/PmagPy/data_files/convert_2_magic/jr6_magic/AF_measurements.txt
-I- 655 records written to measurements file
TRM.txt
-I- Using less strict decoding for /Users/nebula/Python/PmagPy/data_files/convert_2_magic/jr6_magic/TRM.txt, output may have formatting errors
adding measurement column to measurements table!
-I- overwriting /Users/nebula/Python/PmagPy/measurements.txt
-I- 499 records written to measurements file
-I- overwriting /Users/nebula/Python/PmagPy/data_files/convert_2_magic/jr6_magic/TRM_specimens.txt
-I- 42 records written to specimens file
-I- overwriting /Users/nebula/Python/PmagPy/data_files/convert_2_magic/jr6_magic/TRM_samples.txt
-I- 19 records written to samples file
-I- overwriting /Users/nebula/Python/PmagPy/data_files/convert_2_magic/jr6_magic/TRM_sites.txt
-I- 10 records written to sites file
-I- overwriting /Users/nebula/Python/PmagPy/data_files/convert_2_magic/jr6_magic/locations.txt
-I- 1 records written to locations file
-I- overwriting /Users/nebula/Python/PmagPy/data_files/convert_2_magic/jr6_magic/TRM_measurements.txt
-I- 499 records written to measurements file
AP12.txt
-I- Using less strict decoding for /Users/nebula/Python/PmagPy/data_files/convert_2_magic/jr6_magic/AP12.txt, output may have formatting errors
adding measurement column to measurements table!
-I- overwriting /Users/nebula/Python/PmagPy/measurements.txt
-I- 69 records written to measurements file
-I- writing specimens records to /Users/nebula/Python/PmagPy/data_files/convert_2_magic/jr6_magic/AP12_specimens.txt
-I- 9 records written to specimens file
-I- writing samples records to /Users/nebula/Python/PmagPy/data_files/convert_2_magic/jr6_magic/AP12_samples.txt
-I- 6 records written to samples file
-I- writing sites records to /Users/nebula/Python/PmagPy/data_files/convert_2_magic/jr6_magic/AP12_sites.txt
-I- 1 records written to sites file
-I- overwriting /Users/nebula/Python/PmagPy/data_files/convert_2_magic/jr6_magic/locations.txt
-I- 1 records written to locations file
-I- writing measurements records to /Users/nebula/Python/PmagPy/data_files/convert_2_magic/jr6_magic/AP12_measurements.txt
-I- 69 records written to measurements file
-I- overwriting /Users/nebula/Python/PmagPy/data_files/convert_2_magic/jr6_magic/measurements.txt
-I- 1223 records written to measurements file
-I- overwriting /Users/nebula/Python/PmagPy/data_files/convert_2_magic/jr6_magic/specimens.txt
-I- 107 records written to specimens file
-I- overwriting /Users/nebula/Python/PmagPy/data_files/convert_2_magic/jr6_magic/samples.txt
-I- 36 records written to samples file
-I- overwriting /Users/nebula/Python/PmagPy/data_files/convert_2_magic/jr6_magic/sites.txt
-I- 11 records written to sites file
'/Users/nebula/Python/PmagPy/data_files/convert_2_magic/jr6_magic/sites.txt'
ipmag.zeq_magic(meas_file='AP12_measurements.txt',input_dir_path=dir_path, save_plots=False)
No plots could be created for specimen: AP12-01A No plots could be created for specimen: AP12-02A No plots could be created for specimen: AP12-03A
(True, [])
[MagIC Database] [command line version]
Someone took a set of samples from a dike margin in the Troodos Ophiolite and measured their anisotropy of magnetic susceptibility on a Kappabridge KLY 2.0 instrument in the SIO laboratory. An example of the data file format is in k15_magic.
The first line of each set of four has the specimen name, azimuth, plunge, and bedding strike and dip; the next three lines are sets of five measurements in the 15 positions recommended by Jelinek (1977):
Image('data_files/Figures/meas15.png')
The 15 measurements for each specimen, along with orientation information and the specimen name, were saved in the file data_files/k15_magic/k15_example.dat.
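The four-line block structure described above can be parsed with a short sketch like the following. This is not a PmagPy function, just an illustration of the layout; the specimen name and data values are invented:

```python
# Minimal sketch of parsing one four-line k15 block.
# Line 1: specimen name, azimuth, plunge, bedding strike, bedding dip.
# Lines 2-4: five susceptibility measurements each (15 positions total).

def parse_k15_block(lines):
    header = lines[0].split()
    name = header[0]
    az, pl, strike, dip = map(float, header[1:5])
    measurements = []
    for line in lines[1:4]:
        measurements.extend(float(v) for v in line.split())
    assert len(measurements) == 15, "expected 15 measurements per specimen"
    return {"specimen": name, "azimuth": az, "plunge": pl,
            "strike": strike, "dip": dip, "k15": measurements}

block = ["tst001a 80.0 -46.0 204.0 25.0",
         "1.0 1.1 0.9 1.0 1.05",
         "0.95 1.0 1.02 0.98 1.0",
         "1.01 0.99 1.0 1.03 0.97"]
rec = parse_k15_block(block)
print(rec["specimen"], len(rec["k15"]))
```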
To convert the 15-measurement anisotropy of magnetic susceptibility file format to MagIC, use convert.k15().
help(convert.k15)
Help on function k15 in module pmagpy.convert_2_magic: k15(k15file, dir_path='.', input_dir_path='', meas_file='measurements.txt', aniso_outfile='specimens.txt', samp_file='samples.txt', result_file='rmag_anisotropy.txt', specnum=0, sample_naming_con='1', location='unknown', data_model_num=3) converts .k15 format data to MagIC format. assumes Jelinek Kappabridge measurement scheme. Parameters ---------- k15file : str input file name dir_path : str output file directory, default "." input_dir_path : str input file directory IF different from dir_path, default "" meas_file : str output measurement file name, default "measurements.txt" aniso_outfile : str output specimen file name, default "specimens.txt" samp_file: str output sample file name, default "samples.txt" aniso_results_file : str output result file name, default "rmag_results.txt", data model 2 only specnum : int number of characters to designate a specimen, default 0 samp_con : str sample/site naming convention, default '1', see info below location : str location name, default "unknown" data_model_num : int MagIC data model [2, 3], default 3 Returns -------- type - Tuple : (True or False indicating if conversion was sucessful, samp_file name written) Info -------- Infile format: name [az,pl,strike,dip], followed by 3 rows of 5 measurements for each specimen Sample naming convention: [1] XXXXY: where XXXX is an arbitrary length site designation and Y is the single character sample designation. e.g., TG001a is the first sample from site TG001. 
[default] [2] XXXX-YY: YY sample from site XXXX (XXX, YY of arbitary length) [3] XXXX.YY: YY sample from site XXXX (XXX, YY of arbitary length) [4-Z] XXXXYYY: YYY is sample designation with Z characters from site XXX [5] site name same as sample [6] site name entered in site_name column in the orient.txt format input file -- NOT CURRENTLY SUPPORTED [7-Z] [XXXX]YYY: XXXX is site designation with Z characters with sample name XXXXYYYY NB: all others you will have to customize your self or e-mail ltauxe@ucsd.edu for help.
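Sample naming convention [1] above (the site name is everything except the final character of the sample name) can be illustrated with a tiny helper. site_from_sample is a hypothetical name for this sketch, not a PmagPy function:

```python
def site_from_sample(sample):
    """Naming convention [1]: site is the sample name minus its last character."""
    return sample[:-1]

print(site_from_sample("TG001a"))  # → 'TG001'
```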
convert.k15('k15_example.dat',dir_path='data_files/convert_2_magic/k15_magic/',
location='Troodos Ophiolite')
8 records written to file /Users/nebula/Python/PmagPy/data_files/convert_2_magic/k15_magic/samples.txt 48 records written to file /Users/nebula/Python/PmagPy/data_files/convert_2_magic/k15_magic/specimens.txt 120 records written to file /Users/nebula/Python/PmagPy/data_files/convert_2_magic/k15_magic/measurements.txt Data saved to: /Users/nebula/Python/PmagPy/data_files/convert_2_magic/k15_magic/measurements.txt, /Users/nebula/Python/PmagPy/data_files/convert_2_magic/k15_magic/specimens.txt, /Users/nebula/Python/PmagPy/data_files/convert_2_magic/k15_magic/samples.txt
(True, '/Users/nebula/Python/PmagPy/data_files/convert_2_magic/k15_magic/measurements.txt')
ipmag.aniso_magic_nb(infile='specimens.txt',dir_path='data_files/convert_2_magic/k15_magic/')
1 saved in Troodos Ophiolite_s_aniso-data.png 2 saved in Troodos Ophiolite_s_aniso-conf.png
(True, ['Troodos Ophiolite_s_aniso-data.png', 'Troodos Ophiolite_s_aniso-conf.png'])
[MagIC Database] [command line version]
The program AMSSpin, available for download from http://earthref.org/ERDA/940/, generates data for the Kappabridge KLY4S spinning magnetic susceptibility instrument as described by Gee et al. (2008).
Output files are in the format of the file KLY4S_magic_example.dat (found in the measurement_import/kly4s_magic folder).
The columns in the example file are:
Specimen S_1 S_2 S_3 S_4 S_5 S_6 χb(μSI) date time user
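A row in that layout could be unpacked with a sketch like the one below, assuming whitespace-delimited fields in the column order listed above; the row values are invented for illustration and are not from the example file:

```python
# Hypothetical KLY4S-style row, assuming whitespace-delimited fields
# in the column order listed above (values invented for illustration).
cols = ["specimen", "S_1", "S_2", "S_3", "S_4", "S_5", "S_6",
        "chi_b_uSI", "date", "time", "user"]
row = "mk001a 0.3346 0.3333 0.3321 0.0010 -0.0005 0.0002 250.0 04/01/2019 10:15 lisa"
rec = dict(zip(cols, row.split()))
s_values = [float(rec[f"S_{i}"]) for i in range(1, 7)]  # the six tensor elements
print(rec["specimen"], s_values[0])
```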
To convert the Agico Kappabridge KLY4S files generated by the SIO LabVIEW program (written by Jeff Gee), use convert.kly4s(). This function creates the files needed by the MagIC database, and the data can be plotted using aniso_magic. If you were to import the sample files from the LIMS database for these samples, you could plot them versus depth or as equal-area projections, using ani_depthplot and aniso_magic respectively.
help(convert.kly4s)
Help on function kly4s in module pmagpy.convert_2_magic: kly4s(infile, specnum=0, locname='unknown', inst='SIO-KLY4S', samp_con='1', or_con='3', user='', measfile='measurements.txt', aniso_outfile='rmag_anisotropy.txt', samp_infile='', spec_infile='', spec_outfile='specimens.txt', azdip_infile='', dir_path='.', input_dir_path='', data_model_num=3, samp_outfile='samples.txt', site_outfile='sites.txt') converts files generated by SIO kly4S labview program to MagIC formated Parameters ---------- infile : str input file name specnum : int number of characters to designate a specimen, default 0 locname : str location name, default "unknown" samp_con : str sample/site naming convention, default '1', see info below or_con : number orientation convention, default '3', see info below user : str user name, default "" measfile : str output measurement file name, default "measurements.txt" aniso_outfile : str output anisotropy file name, default "rmag_anisotropy.txt", data model 2 only samp_infile : str existing sample infile (not required), default "" spec_infile : str existing site infile (not required), default "" spec_outfile : str output specimen file name, default "specimens.txt" azdip_infile : str AZDIP file with orientations, will create sample output file dir_path : str output directory, default "." 
input_dir_path : str input file directory IF different from dir_path, default "" data_model_num : int MagIC data model 2 or 3, default 3 samp_outfile : str sample output filename, default "samples.txt" site_outfile : str site output filename, default "sites.txt" Returns -------- type - Tuple : (True or False indicating if conversion was sucessful, meas_file name written) Info ---------- Orientation convention: [1] Lab arrow azimuth= mag_azimuth; Lab arrow dip=-field_dip i.e., field_dip is degrees from vertical down - the hade [default] [2] Lab arrow azimuth = mag_azimuth-90; Lab arrow dip = -field_dip i.e., mag_azimuth is strike and field_dip is hade [3] Lab arrow azimuth = mag_azimuth; Lab arrow dip = 90-field_dip i.e., lab arrow same as field arrow, but field_dip was a hade. [4] lab azimuth and dip are same as mag_azimuth, field_dip [5] lab azimuth is same as mag_azimuth,lab arrow dip=field_dip-90 [6] Lab arrow azimuth = mag_azimuth-90; Lab arrow dip = 90-field_dip [7] all others you will have to either customize your self or e-mail ltauxe@ucsd.edu for help. Sample naming convention: [1] XXXXY: where XXXX is an arbitrary length site designation and Y is the single character sample designation. e.g., TG001a is the first sample from site TG001. [default] [2] XXXX-YY: YY sample from site XXXX (XXX, YY of arbitary length) [3] XXXX.YY: YY sample from site XXXX (XXX, YY of arbitary length) [4-Z] XXXX[YYY]: YYY is sample designation with Z characters from site XXX [5] site name = sample name [6] site name entered in site_name column in the orient.txt format input file -- NOT CURRENTLY SUPPORTED [7-Z] [XXX]YYY: XXX is site designation with Z characters from samples XXXYYY
convert.kly4s('KLY4S_magic_example.dat',
dir_path='data_files/convert_2_magic/kly4s_magic/')
anisotropy data added to specimen records 52 records written to file /Users/nebula/Python/PmagPy/data_files/convert_2_magic/kly4s_magic/specimens.txt specimen information written to new file: /Users/nebula/Python/PmagPy/data_files/convert_2_magic/kly4s_magic/specimens.txt 26 records written to file /Users/nebula/Python/PmagPy/data_files/convert_2_magic/kly4s_magic/measurements.txt measurement data saved in /Users/nebula/Python/PmagPy/data_files/convert_2_magic/kly4s_magic/measurements.txt 26 records written to file samples.txt 26 records written to file sites.txt
(True, '/Users/nebula/Python/PmagPy/data_files/convert_2_magic/kly4s_magic/measurements.txt')
ipmag.aniso_magic_nb(infile='specimens.txt',dir_path='data_files/convert_2_magic/kly4s_magic/')
1 saved in unknown_s_aniso-data.png 2 saved in unknown_s_aniso-conf.png
(True, ['unknown_s_aniso-data.png', 'unknown_s_aniso-conf.png'])
[MagIC Database] [command line version]
To convert Lamont-Doherty Earth Observatory data files to MagIC, use convert.ldeo().
NB: this doesn't seem to work properly at all.
help(convert.ldeo)
Help on function ldeo in module pmagpy.convert_2_magic: ldeo(magfile, dir_path='.', input_dir_path='', meas_file='measurements.txt', spec_file='specimens.txt', samp_file='samples.txt', site_file='sites.txt', loc_file='locations.txt', specnum=0, samp_con='1', location='unknown', codelist='', coil='', arm_labfield=5e-05, trm_peakT=873.0, peakfield=0, labfield=0, phi=0, theta=0, mass_or_vol='v', noave=0) converts Lamont Doherty Earth Observatory measurement files to MagIC data base model 3.0 Parameters _________ magfile : input measurement file dir_path : output directory path, default "." input_dir_path : input file directory IF different from dir_path, default "" meas_file : output file measurement file name, default "measurements.txt" spec_file : output file specimen file name, default "specimens.txt" samp_file : output file sample file name, default "samples.txt" site_file : output file site file name, default "sites.txt" loc_file : output file location file name, default "locations.txt" specnum : number of terminal characters distinguishing specimen from sample, default 0 samp_con : sample/site naming convention, default "1" "1" XXXXY: where XXXX is an arbitr[ary length site designation and Y is the single character sample designation. e.g., TG001a is the first sample from site TG001. [default] "2" XXXX-YY: YY sample from site XXXX (XXX, YY of arbitary length) "3" XXXX.YY: YY sample from site XXXX (XXX, YY of arbitary length) "4-Z" XXXX[YYY]: YYY is sample designation with Z characters from site XXX "5" site name same as sample "6" site is entered under a separate column NOT CURRENTLY SUPPORTED "7-Z" [XXXX]YYY: XXXX is site designation with Z characters with sample name XXXXYYYY NB: all others you will have to customize your self or e-mail ltauxe@ucsd.edu for help. 
"8" synthetic - has no site name "9" ODP naming convention codelist : colon delimited string of lab protocols (e.g., codelist="AF"), default "" AF: af demag T: thermal including thellier but not trm acquisition S: Shaw method I: IRM (acquisition) N: NRM only TRM: trm acquisition ANI: anisotropy experiment D: double AF demag G: triple AF demag (GRM protocol) coil : 1,2, or 3 unist of IRM field in volts using ASC coil #1,2 or 3, default "" arm_labfield : dc field for ARM in tesla, default 50e-6 peakfield : peak af field for ARM, default 873. trm_peakT : peak temperature for TRM, default 0 labfield : lab field in tesla for TRM, default 0 phi, theta : direction of lab field, default 0, 0 mass_or_vol : is the parameter in the file mass 'm' or volume 'v', default "v" noave : boolean, if False, average replicates, default False Returns -------- type - Tuple : (True or False indicating if conversion was sucessful, meas_file name written) Effects _______ creates MagIC formatted tables
convert.ldeo('ldeo_magic_example.dat',codelist='AF',
dir_path='data_files/convert_2_magic/ldeo_magic/')
adding measurement column to measurements table! -I- overwriting /Users/nebula/Python/PmagPy/measurements.txt -I- 503 records written to measurements file -I- writing specimens records to /Users/nebula/Python/PmagPy/data_files/convert_2_magic/ldeo_magic/specimens.txt -I- 35 records written to specimens file -I- writing samples records to /Users/nebula/Python/PmagPy/data_files/convert_2_magic/ldeo_magic/samples.txt -I- 35 records written to samples file -I- writing sites records to /Users/nebula/Python/PmagPy/data_files/convert_2_magic/ldeo_magic/sites.txt -I- 35 records written to sites file -I- writing locations records to /Users/nebula/Python/PmagPy/data_files/convert_2_magic/ldeo_magic/locations.txt -I- 1 records written to locations file -I- writing measurements records to /Users/nebula/Python/PmagPy/data_files/convert_2_magic/ldeo_magic/measurements.txt -I- 503 records written to measurements file
(True, '/Users/nebula/Python/PmagPy/data_files/convert_2_magic/ldeo_magic/measurements.txt')
ipmag.zeq_magic(input_dir_path='data_files/convert_2_magic/ldeo_magic/', save_plots=False)
(True, [])
[MagIC Database] [command line version]
To convert the Liverpool university database format to MagIC use convert.livdb().
Here we have several experiment types as examples.
help(convert.livdb)
Help on function livdb in module pmagpy.convert_2_magic: livdb(input_dir_path, output_dir_path='.', meas_out='measurements.txt', spec_out='specimens.txt', samp_out='samples.txt', site_out='sites.txt', loc_out='locations.txt', samp_name_con='sample=specimen', samp_num_chars=0, site_name_con='site=sample', site_num_chars=0, location_name='', data_model_num=3) Search input directory for Livdb .csv or .livdb files and convert them to MagIC format. Input directory should contain only input files for one location. Parameters ---------- input_dir_path : str input directory with .csv or .livdb files to import output_dir_path : str directory to output files, default "." meas_out : str output measurement file name, default "measurements.txt" spec_out : str output specimen file name, default "specimens.txt" samp_out: str output sample file name, default "samples.txt" site_out : str output site file name, default "sites.txt" loc_out : str output location file name, default "locations.txt" samp_name_con : str specimen --> sample naming convention, default 'sample=specimen' options: {1: 'sample=specimen', 2: 'no. of terminate characters', 3: 'character delimited'} samp_num_chars : int or str if using 'no. of terminate characters' or 'character delimited', provide the number of characters or the character delimiter site_name_con : str sample --> site naming convention, default 'site=sample' options: {1: 'site=sample', 2: 'no. of terminate characters', 3: 'character delimited'} site_num_chars : int or str if using 'no. 
of terminate characters' or 'character delimited', provide the number of characters or the character delimiter locname : str location name, default "" data_model_num : int MagIC data model 2 or 3, default 3 Returns -------- type - Tuple : (True or False indicating if conversion was sucessful, file name written) Input file format ----------------- # -------------------------------------- # Read the file # # Livdb Database structure # # HEADER: # 1) First line is the header. # The header includes 19 fields delimited by comma (',') # Notice: space is not a delimiter ! # In the list below the delimiter is not used, and the conversion script assumes comma delimited file # # Header fields: # Sample code (string): (delimiter = space+) # Sample Dip (degrees): (delimiter = space) # Sample Dec (degrees): (delimiter = space) # Height (meters): (delimiter = space) # Position (no units): (delimiter = space) # Thickness (meters): (delimiter = space) # Unit Dip (aka tilt) (degrees): (delimiter = space) # Unit Dip Direction (aka Direction) (degrees): (delimiter = space) # Site Latitude (decimal degrees): (delimiter = space) # Site Longitude (decimal degrees): (delimiter = space) # Experiment Type (string): (delimiter = |) # Name of measurer (string): (delimiter = |) # Magnetometer name (string): (delimiter = |) # Demagnetiser name (string): (delimiter = |) # Specimen/Experiment Comment (string): (delimiter = |) # Database version (integer): (delimiter = |) # Conversion Version (string): (delimiter = |) # Sample Volume (cc): (delimiter = |) # Sample Density (kg/m^3): (delimiter = |) # # # BODY: # 1) Body includes 22 fields delimited by comma (',') # 2) Body ends with an "END" statment # # Body fields: # Treatment (aka field) (mT / deg C / 10-2 W): (delimiter = space) # Microwave Power (W) : (delimiter = space) # Microwave Time (s) : (delimiter = space) # X (nAm^2): (delimiter = space) # Y (nAm^2): (delimiter = space) # Z (nAm^2): (delimiter = space) # Mass g: (delimiter = space) # 
Applied field intensity (micro_T): (delimiter = space) # Applied field Dec (degrees): (delimiter = space) # Applied Field Inc (degrees): (delimiter = space) # Measurement Date (DD-MM-YYYY) or (DD/MM/YYYY) #### CHECK !! ## (delimiter = |) # Measurement Time (HH:SS:MM) (delimiter = |) # Measurement Remark (string) (delimiter = |) # Step Number (integer) (delimiter = |) # Step Type (string) (Z/I/P/T/O/NRM) (delimiter = |) # Tristan Gain (integer) (delimiter = |) # Microwave Power Integral (W.s) (delimiter = |) # JR6 Error(percent %) (delimiter = |) # FiT Smm (?) (delimiter = |) # Utrecht Error (percent %) (delimiter = |) # AF Demag/Remag Peak Field (mT) (delimiter = |) # TH Demag/Remag Peak Temperature (deg C) (delimiter = |) # ------------------------------------------------------------- # -------------------------------------- # Important assumptions: # (1) The program checks if the same treatment appears more than once (a measurement is repeated twice). # If yes, then it takes only the second one and ignores the first. # (2) –99 and 999 are codes for N/A # (3) The "treatment step" for Thermal Thellier experiment is taken from the "TH Demag/Remag Peak Temperature" # (4) The "treatment step" for Microwave Thellier experiment is taken from the "Step Number" # (5) As there might be contradiction between the expected treatment (i.e. Z,I,P,T,A assumed by the experiment type) # and "Step Type" field due to typos or old file formats: # The program concludes the expected treatment from the following: # ("Experiment Type) + ("Step Number" or "TH Demag/Remag Peak Temperature") + (the order of the measurements). # The conversion script will spit out a WARNING message in a case of contradiction. # (6) If the program finds AF demagnetization before the infield ot zerofield steps: # then assumes that this is an AFD step domne before the experiment. # (7) The prgram ignores microwave fields (aka field,Microwave Power,Microwave Time) in Thermal experiments. 
And these fields will not be converted # to MagIC. # (8) NRM step: NRM step is regonized either by "Applied field intensity"=0 and "Applied field Dec" =0 and "Applied Field Inc"=0 # or if "Step Type" = NRM # # # # ------------------------------------------------------------- # -------------------------------------- # Script was tested on the following protocols: # TH-PI-IZZI+ [November 2013, rshaar] # MW-PI-C++ [November 2013, rshaar] # MW-PI-IZZI+ ]November 2013, rshaar] # # Other protocols should be tested before use. # # # # -------------------------------------------------------------
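Assumption (1) in the docstring above (when the same treatment appears twice, only the later measurement is kept) amounts to a keep-last dedupe. A minimal sketch of that rule, with made-up treatment steps:

```python
# Keep-last dedupe on treatment step, per assumption (1) above.
def keep_last(measurements):
    by_treatment = {}
    for m in measurements:            # later entries overwrite earlier ones
        by_treatment[m["treatment"]] = m
    return list(by_treatment.values())

steps = [{"treatment": 100, "moment": 5.0},
         {"treatment": 200, "moment": 4.1},
         {"treatment": 200, "moment": 4.0},  # repeat: this one is kept
         {"treatment": 300, "moment": 3.2}]
deduped = keep_last(steps)
print(len(deduped))
```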
Here's an example of an IZZI-style thermal experiment:
convert.livdb('data_files/convert_2_magic/livdb_magic/TH_IZZI+/',
output_dir_path='data_files/convert_2_magic/livdb_magic/TH_IZZI+',
site_name_con=2,site_num_chars=3)
Open file: /Users/nebula/Python/PmagPy/data_files/convert_2_magic/livdb_magic/TH_IZZI+/ATPI_Thellier.livdb Found a repeating measurement at line 18, sample ATPIPV26-15A. taking the last one Found a repeating measurement at line 19, sample ATPIPV31-7B . taking the last one 659 records written to file /Users/nebula/Python/PmagPy/data_files/convert_2_magic/livdb_magic/TH_IZZI+/measurements.txt 27 records written to file /Users/nebula/Python/PmagPy/data_files/convert_2_magic/livdb_magic/TH_IZZI+/specimens.txt -I- Removing non-MagIC column names from measurements: sample site location -I- Removing non-MagIC column names from specimens: height site location -I- overwriting /Users/nebula/Python/PmagPy/data_files/convert_2_magic/livdb_magic/TH_IZZI+/measurements.txt -I- 659 records written to measurements file -I- overwriting /Users/nebula/Python/PmagPy/data_files/convert_2_magic/livdb_magic/TH_IZZI+/specimens.txt -I- 27 records written to specimens file -I- overwriting /Users/nebula/Python/PmagPy/data_files/convert_2_magic/livdb_magic/TH_IZZI+/samples.txt -I- 27 records written to samples file -I- overwriting /Users/nebula/Python/PmagPy/data_files/convert_2_magic/livdb_magic/TH_IZZI+/sites.txt -I- 7 records written to sites file
(True, '/Users/nebula/Python/PmagPy/data_files/convert_2_magic/livdb_magic/TH_IZZI+/measurements.txt')
ipmag.thellier_magic(input_dir_path='data_files/convert_2_magic/livdb_magic/TH_IZZI+',
save_plots=False, interactive=False, n_specs=5)
ATPIPV04-1A ATPIPV04-6A ATPIPV04-7N ATPIPV14-1A ATPIPV14-2A
(True, [])
Here's one for a microwave "C+" experiment:
convert.livdb('data_files/convert_2_magic/livdb_magic/MW_C+/',
output_dir_path='data_files/convert_2_magic/livdb_magic/MW_C+',
site_name_con=2,site_num_chars=3)
Open file: /Users/nebula/Python/PmagPy/data_files/convert_2_magic/livdb_magic/MW_C+/CHEV.livdb 23 records written to file /Users/nebula/Python/PmagPy/data_files/convert_2_magic/livdb_magic/MW_C+/measurements.txt 1 records written to file /Users/nebula/Python/PmagPy/data_files/convert_2_magic/livdb_magic/MW_C+/specimens.txt -I- Removing non-MagIC column names from measurements: sample site location -I- Removing non-MagIC column names from specimens: height site location -I- overwriting /Users/nebula/Python/PmagPy/data_files/convert_2_magic/livdb_magic/MW_C+/measurements.txt -I- 23 records written to measurements file -I- overwriting /Users/nebula/Python/PmagPy/data_files/convert_2_magic/livdb_magic/MW_C+/specimens.txt -I- 1 records written to specimens file -I- overwriting /Users/nebula/Python/PmagPy/data_files/convert_2_magic/livdb_magic/MW_C+/samples.txt -I- 1 records written to samples file -I- overwriting /Users/nebula/Python/PmagPy/data_files/convert_2_magic/livdb_magic/MW_C+/sites.txt -I- 1 records written to sites file
(True, '/Users/nebula/Python/PmagPy/data_files/convert_2_magic/livdb_magic/MW_C+/measurements.txt')
An example with both microwave IZZI+ and C++ experiments:
convert.livdb('data_files/convert_2_magic/livdb_magic/MW_IZZI+andC++/',
output_dir_path='data_files/convert_2_magic/livdb_magic/MW_IZZI+andC++',
samp_name_con='2', samp_num_chars=1,site_name_con=2,site_num_chars=1)
Open file: /Users/nebula/Python/PmagPy/data_files/convert_2_magic/livdb_magic/MW_IZZI+andC++/NVPA.livdb -W- WARNING sample NVPADC18A2 treatment= 13. Step Type is O but the program assumes LT-M-I -W- WARNING sample NVPADC18A2 treatment= 13. Step Type is O but the program assumes LT-M-I -W- WARNING sample NVPADC18A2 treatment= 13. Step Type is Z but the program assumes LT-PMRM-MD -W- livdb.py does not support this experiment type yet. Please report your issue on https://github.com/PmagPy/PmagPy/issues -W- WARNING sample NVPATF16C2 treatment= 1. Step Type is Z but the program assumes LT-M-I -W- WARNING sample NVPATF16C2 treatment= 3. Step Type is I but the program assumes LT-M-Z -W- WARNING sample NVPATF16C2 treatment= 3. Step Type is Z but the program assumes LT-M-I -W- WARNING sample NVPATF16C2 treatment= 2. Step Type is P but the program assumes LT-PMRM-MD -W- WARNING sample NVPATF16C2 treatment= 4. Step Type is Z but the program assumes LT-M-I -W- WARNING sample NVPATF16C2 treatment= 4. Step Type is I but the program assumes LT-M-Z -W- WARNING sample NVPATF16C2 treatment= 7. Step Type is I but the program assumes LT-M-Z -W- WARNING sample NVPATF16C2 treatment= 7. Step Type is Z but the program assumes LT-M-I -W- WARNING sample NVPATF16C2 treatment= 5. Step Type is P but the program assumes LT-PMRM-MD -W- WARNING sample NVPATF16C2 treatment= 8. Step Type is Z but the program assumes LT-M-I -W- WARNING sample NVPATF16C2 treatment= 8. Step Type is I but the program assumes LT-M-Z -W- WARNING sample NVPATF16C2 treatment= 11. Step Type is I but the program assumes LT-M-Z -W- WARNING sample NVPATF16C2 treatment= 11. Step Type is Z but the program assumes LT-M-I -W- WARNING sample NVPATF16C2 treatment= 9. Step Type is P but the program assumes LT-PMRM-MD -W- WARNING sample NVPATF16C2 treatment= 12. Step Type is Z but the program assumes LT-M-I -W- WARNING sample NVPATF16C2 treatment= 12. 
Step Type is I but the program assumes LT-M-Z 368 records written to file /Users/nebula/Python/PmagPy/data_files/convert_2_magic/livdb_magic/MW_IZZI+andC++/measurements.txt 10 records written to file /Users/nebula/Python/PmagPy/data_files/convert_2_magic/livdb_magic/MW_IZZI+andC++/specimens.txt -I- Removing non-MagIC column names from measurements: sample site location -I- Removing non-MagIC column names from specimens: height site location -I- overwriting /Users/nebula/Python/PmagPy/data_files/convert_2_magic/livdb_magic/MW_IZZI+andC++/measurements.txt -I- 368 records written to measurements file -I- overwriting /Users/nebula/Python/PmagPy/data_files/convert_2_magic/livdb_magic/MW_IZZI+andC++/specimens.txt -I- 10 records written to specimens file -I- overwriting /Users/nebula/Python/PmagPy/data_files/convert_2_magic/livdb_magic/MW_IZZI+andC++/samples.txt -I- 9 records written to samples file -I- overwriting /Users/nebula/Python/PmagPy/data_files/convert_2_magic/livdb_magic/MW_IZZI+andC++/sites.txt -I- 9 records written to sites file
(True, '/Users/nebula/Python/PmagPy/data_files/convert_2_magic/livdb_magic/MW_IZZI+andC++/measurements.txt')
An example for microwave OT+:
convert.livdb('data_files/convert_2_magic/livdb_magic/MW_OT+/',
output_dir_path='data_files/convert_2_magic/livdb_magic/MW_OT+',
site_name_con=2,site_num_chars=3)
Open file: /Users/nebula/Python/PmagPy/data_files/convert_2_magic/livdb_magic/MW_OT+/16-1.livdb 45 records written to file /Users/nebula/Python/PmagPy/data_files/convert_2_magic/livdb_magic/MW_OT+/measurements.txt 1 records written to file /Users/nebula/Python/PmagPy/data_files/convert_2_magic/livdb_magic/MW_OT+/specimens.txt -I- Removing non-MagIC column names from measurements: sample site location -I- Removing non-MagIC column names from specimens: height site location -I- overwriting /Users/nebula/Python/PmagPy/data_files/convert_2_magic/livdb_magic/MW_OT+/measurements.txt -I- 45 records written to measurements file -I- overwriting /Users/nebula/Python/PmagPy/data_files/convert_2_magic/livdb_magic/MW_OT+/specimens.txt -I- 1 records written to specimens file -I- overwriting /Users/nebula/Python/PmagPy/data_files/convert_2_magic/livdb_magic/MW_OT+/samples.txt -I- 1 records written to samples file -I- overwriting /Users/nebula/Python/PmagPy/data_files/convert_2_magic/livdb_magic/MW_OT+/sites.txt -I- 1 records written to sites file
(True, '/Users/nebula/Python/PmagPy/data_files/convert_2_magic/livdb_magic/MW_OT+/measurements.txt')
And an example for MW_P experiments:
convert.livdb('data_files/convert_2_magic/livdb_magic/MW_P/',
output_dir_path='data_files/convert_2_magic/livdb_magic/MW_P',
site_name_con=2,site_num_chars=3)
Open file: /Users/nebula/Python/PmagPy/data_files/convert_2_magic/livdb_magic/MW_P/perp.csv 73 records written to file /Users/nebula/Python/PmagPy/data_files/convert_2_magic/livdb_magic/MW_P/measurements.txt 4 records written to file /Users/nebula/Python/PmagPy/data_files/convert_2_magic/livdb_magic/MW_P/specimens.txt -I- Removing non-MagIC column names from measurements: sample site location -I- Removing non-MagIC column names from specimens: height site location -I- overwriting /Users/nebula/Python/PmagPy/data_files/convert_2_magic/livdb_magic/MW_P/measurements.txt -I- 73 records written to measurements file -I- overwriting /Users/nebula/Python/PmagPy/data_files/convert_2_magic/livdb_magic/MW_P/specimens.txt -I- 4 records written to specimens file -I- overwriting /Users/nebula/Python/PmagPy/data_files/convert_2_magic/livdb_magic/MW_P/samples.txt -I- 4 records written to samples file -I- overwriting /Users/nebula/Python/PmagPy/data_files/convert_2_magic/livdb_magic/MW_P/sites.txt -I- 2 records written to sites file
(True, '/Users/nebula/Python/PmagPy/data_files/convert_2_magic/livdb_magic/MW_P/measurements.txt')
Now you can look at these data (except for MW_P) with thellier_gui or thellier_magic.
[MagIC Database] [command line version]
To convert a Curie temperature experiment to MagIC, use convert.mst(). The data file format should be a space-delimited file with temperature and magnetization couplets.
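A file in that layout can be written with a short sketch; the temperature/magnetization values below are synthetic and purely illustrative:

```python
import os
import tempfile

# Write a space-delimited (temperature, magnetization) file in the
# layout convert.mst() expects; the values here are synthetic.
pairs = [(20, 1.00), (100, 0.95), (300, 0.70), (500, 0.30), (580, 0.02)]
path = os.path.join(tempfile.mkdtemp(), "curie_synthetic.dat")
with open(path, "w") as f:
    for T, M in pairs:
        f.write(f"{T} {M}\n")

with open(path) as f:
    lines = f.read().splitlines()
print(lines[0])
```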
help(convert.mst)
Help on function mst in module pmagpy.convert_2_magic: mst(infile, spec_name='unknown', dir_path='.', input_dir_path='', meas_file='measurements.txt', samp_infile='samples.txt', user='', specnum=0, samp_con='1', labfield=0.5, location='unknown', syn=False, data_model_num=3) Convert MsT data (T,M) to MagIC measurements format files Parameters ---------- infile : str input file name specimen : str specimen name, default "unknown" dir_path : str working directory, default "." input_dir_path : str input file directory IF different from dir_path, default "" meas_file : str output measurement file name, default "measurements.txt" samp_infile : str existing sample infile (not required), default "samples.txt" user : str user name, default "" specnum : int number of characters to designate a specimen, default 0 samp_con : str sample/site naming convention, default '1', see info below labfield : float DC_FIELD in Tesla, default : .5 location : str location name, default "unknown" syn : bool synthetic, default False data_model_num : int MagIC data model 2 or 3, default 3 Returns -------- type - Tuple : (True or False indicating if conversion was sucessful, file name written) Info -------- Sample naming convention: [1] XXXXY: where XXXX is an arbitrary length site designation and Y is the single character sample designation. e.g., TG001a is the first sample from site TG001. [default] [2] XXXX-YY: YY sample from site XXXX (XXX, YY of arbitary length) [3] XXXX.YY: YY sample from site XXXX (XXX, YY of arbitary length) [4-Z] XXXX[YYY]: YYY is sample designation with Z characters from site XXX [5] site name = sample name [6] site name entered in site_name column in the orient.txt format input file -- NOT CURRENTLY SUPPORTED [7-Z] [XXX]YYY: XXX is site designation with Z characters from samples XXXYYY
convert.mst('curie_example.dat',samp_con="5",
dir_path='data_files/convert_2_magic/mst_magic/')
560 records written to file /Users/nebula/Python/PmagPy/data_files/convert_2_magic/mst_magic/measurements.txt results put in /Users/nebula/Python/PmagPy/data_files/convert_2_magic/mst_magic/measurements.txt -I- writing specimens records to /Users/nebula/Python/PmagPy/data_files/convert_2_magic/mst_magic/specimens.txt -I- 1 records written to specimens file -I- writing samples records to /Users/nebula/Python/PmagPy/data_files/convert_2_magic/mst_magic/samples.txt -I- 1 records written to samples file -I- writing sites records to /Users/nebula/Python/PmagPy/data_files/convert_2_magic/mst_magic/sites.txt -I- 1 records written to sites file -I- writing locations records to /Users/nebula/Python/PmagPy/data_files/convert_2_magic/mst_magic/locations.txt -I- 1 records written to locations file -I- overwriting /Users/nebula/Python/PmagPy/data_files/convert_2_magic/mst_magic/samples.txt -I- 1 records written to samples file -I- overwriting /Users/nebula/Python/PmagPy/data_files/convert_2_magic/mst_magic/sites.txt -I- 1 records written to sites file -I- overwriting /Users/nebula/Python/PmagPy/data_files/convert_2_magic/mst_magic/locations.txt -I- 1 records written to locations file
(True, '/Users/nebula/Python/PmagPy/data_files/convert_2_magic/mst_magic/measurements.txt')
We can now use ipmag.curie() to plot the data.
ipmag.curie(path_to_file='data_files/convert_2_magic/mst_magic/',file_name='measurements.txt',magic=True)
second derivative maximum is at T=205
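The "second derivative maximum" message above is how ipmag.curie() reports its Curie temperature estimate: it smooths M(T) and picks the temperature where the second derivative peaks. Here is a minimal NumPy sketch of that idea on a synthetic curve; it illustrates the principle only and is not the PmagPy implementation, which does more careful smoothing.

```python
import numpy as np

def curie_estimate(T, M, window=11):
    """Estimate the Curie temperature as the temperature at which the
    second derivative of the smoothed M(T) curve is largest.  Core idea
    behind ipmag.curie(), not its implementation."""
    kernel = np.ones(window) / window
    M_smooth = np.convolve(M, kernel, mode='same')   # moving-average smoothing
    d2M = np.gradient(np.gradient(M_smooth, T), T)   # numerical 2nd derivative
    d2M[:window] = 0.0    # suppress edge artifacts from the convolution
    d2M[-window:] = 0.0
    return T[np.argmax(d2M)]

# synthetic thermomagnetic curve with a transition near 205 C
T = np.linspace(20, 400, 400)
M = 1.0 / (1.0 + np.exp((T - 205.0) / 10.0))
print(curie_estimate(T, M))
```

Note that for a smooth sigmoidal curve the second-derivative maximum sits somewhat above the inflection point, which is one reason the choice of smoothing window matters for real, noisy data.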
[MagIC Database] [command line version]
This format is used to import .PMD formatted magnetometer files (used, for example, in the PaleoMac software of Cogné, 2003) into the MagIC format. (See http://www.ipgp.fr/~cogne/pub/paleomac/PMhome.html for the PaleoMac home page.) The version of these files that pmd_magic expects (the UCSC version) contains demagnetization data for a single specimen and follows the format of the example file in ../measurement_import/pmd_magic/PMD/ss0207a.pmd.
The first line is a comment line. The second line has the specimen name, the core azimuth (a=) and plunge (b=), which are assumed to be the lab arrow azimuth and plunge (orientation scheme #4). The third line is a header explaining the columns in the file.
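The specimen line just described (name plus "a=" azimuth and "b=" plunge) can be pulled apart with a couple of regular expressions. A sketch; the example line below is hypothetical, modeled on the description above rather than copied from a real UCSC .pmd file:

```python
import re

def parse_pmd_specimen_line(line):
    """Pull the specimen name, core azimuth (a=) and plunge (b=) out of
    the second line of a PMD file.  The layout of the example line below
    is hypothetical; check a real .pmd file for the exact format."""
    name = line.split()[0]
    azimuth = float(re.search(r'a=\s*([-\d.]+)', line).group(1))
    plunge = float(re.search(r'b=\s*([-\d.]+)', line).group(1))
    return name, azimuth, plunge

print(parse_pmd_specimen_line('ss0207a   a= 276.5   b= 56.0'))
```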
Use convert.pmd() to convert the file ss0207a.pmd in the directory 'PMD' inside the 'pmd_magic' folder of the measurement_import directory in the example data_files directory. These samples were taken at a location named 'Summit Springs' and follow a naming convention of the type XXXX[YYY], where YYY is the sample designation with Z characters from site XXX, i.e., naming convention 4-2. A single character distinguishes the specimen from the sample (specnum=1). All samples were oriented with a magnetic compass.
help(convert.pmd)
Help on function pmd in module pmagpy.convert_2_magic: pmd(mag_file, dir_path='.', input_dir_path='', meas_file='measurements.txt', spec_file='specimens.txt', samp_file='samples.txt', site_file='sites.txt', loc_file='locations.txt', lat='', lon='', specnum=0, samp_con='1', location='unknown', noave=0, meth_code='LP-NO') converts PMD (Enkin) format files to MagIC format files Parameters ---------- mag_file : str input file name, required dir_path : str working directory, default "." input_dir_path : str input file directory IF different from dir_path, default "" spec_file : str output specimen file name, default "specimens.txt" samp_file: str output sample file name, default "samples.txt" site_file : str output site file name, default "sites.txt" loc_file : str output location file name, default "locations.txt" lat : float or str latitude, default "" lon : float or str longitude, default "" specnum : int number of characters to designate a specimen, default 0 samp_con : str sample/site naming convention, default '1', see info below location : str location name, default "unknown" noave : bool do not average duplicate measurements, default False (so by default, DO average) meth_code : str default "LP-NO" e.g. [SO-MAG, SO-SUN, SO-SIGHT, ...] Returns --------- Tuple : (True or False indicating if conversion was sucessful, file name written) Info -------- Sample naming convention: [1] XXXXY: where XXXX is an arbitrary length site designation and Y is the single character sample designation. e.g., TG001a is the first sample from site TG001. [default] [2] XXXX-YY: YY sample from site XXXX (XXX, YY of arbitary length) [3] XXXX.YY: YY sample from site XXXX (XXX, YY of arbitary length) [4-Z] XXXX[YYY]: YYY is sample designation with Z characters from site XXX [5] site name = sample name [6] site name entered in site_name column in the orient.txt format input file -- NOT CURRENTLY SUPPORTED [7-Z] [XXX]YYY: XXX is site designation with Z characters from samples XXXYYY
convert.pmd('ss0207a.pmd',dir_path='data_files/convert_2_magic/pmd_magic/PMD/',
samp_con='4-2',location='Summit Springs',specnum=1)
adding measurement column to measurements table! -I- overwriting /Users/nebula/Python/PmagPy/measurements.txt -I- 8 records written to measurements file -I- writing specimens records to /Users/nebula/Python/PmagPy/data_files/convert_2_magic/pmd_magic/PMD/specimens.txt -I- 1 records written to specimens file -I- writing samples records to /Users/nebula/Python/PmagPy/data_files/convert_2_magic/pmd_magic/PMD/samples.txt -I- 1 records written to samples file -I- writing sites records to /Users/nebula/Python/PmagPy/data_files/convert_2_magic/pmd_magic/PMD/sites.txt -I- 1 records written to sites file -I- writing locations records to /Users/nebula/Python/PmagPy/data_files/convert_2_magic/pmd_magic/PMD/locations.txt -I- 1 records written to locations file -I- writing measurements records to /Users/nebula/Python/PmagPy/data_files/convert_2_magic/pmd_magic/PMD/measurements.txt -I- 8 records written to measurements file
(True, '/Users/nebula/Python/PmagPy/data_files/convert_2_magic/pmd_magic/PMD/measurements.txt')
[MagIC Database] [command line version]
This program converts SIO format magnetometer files to the MagIC measurements format. The columns in the example data file are:
Specimen treatment intensity declination inclination optional_string
The treatment field is the temperature (in centigrade), the AF field (in mT), the impulse field strength, etc. For special experiments like IRM acquisition, the coil number of the popular ASC impulse magnetizer can be specified if the treatment steps are in volts. The position for anisotropy experiments, or whether the treatment is 'in-field' or in zero field, also requires special formatting. The units of the intensity field are cgs and the directions are relative to the 'lab arrow' on the specimen. Here are some examples of commonly used specimens and the conversions from field arrow to lab arrow.
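Because the SIO intensities are in cgs while MagIC stores moments in SI, a unit conversion is involved on import. The standard cgs-to-SI factor for magnetic moment is 1 emu = 10^-3 A m^2; a trivial sketch of that arithmetic (stated as background, not as a quote of the converter's internals):

```python
# 1 emu = 1e-3 A m^2 is the standard cgs-to-SI conversion for moment.
EMU_TO_AM2 = 1e-3

def emu_to_Am2(moment_emu):
    """Convert a magnetic moment from cgs (emu) to SI (A m^2)."""
    return moment_emu * EMU_TO_AM2

print(emu_to_Am2(2.5e-4))
```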
Image('data_files/Figures/samples.png')
As an example, we use data from Sbarbori et al. (2009) measured on a set of samples from the location 'Socorro', including AF, thermal, and Thellier experimental data. These were saved in sio_af_example.dat, sio_thermal_example.dat, and sio_thellier_example.dat, respectively. The lab field for the Thellier experiment was 25 μT and was applied along the specimen's Z axis (phi=0, theta=90).
We can convert each example file into a measurements file (af_measurements.txt, etc.) using the function convert.sio(), then combine the results following the instructions for combine_magic.
help(convert.sio)
Help on function sio in module pmagpy.convert_2_magic: sio(mag_file, dir_path='.', input_dir_path='', meas_file='measurements.txt', spec_file='specimens.txt', samp_file='samples.txt', site_file='sites.txt', loc_file='locations.txt', samp_infile='', institution='', syn=False, syntype='', instrument='', labfield=0, phi=0, theta=0, peakfield=0, specnum=0, samp_con='1', location='unknown', lat='', lon='', noave=False, codelist='', cooling_rates='', coil='', timezone='UTC', user='') converts Scripps Institution of Oceanography measurement files to MagIC data base model 3.0 Parameters _________ magfile : input measurement file dir_path : output directory path, default "." input_dir_path : input file directory IF different from dir_path, default "" meas_file : output file measurement file name, default "measurements.txt" spec_file : output file specimen file name, default "specimens.txt" samp_file : output file sample file name, default "samples.tt" site_file : output file site file name, default "sites.txt" loc_file : output file location file name, default "locations.txt" samp_infile : output file to append to, default "" syn : if True, this is a synthetic specimen, default False syntype : sample material type, default "" instrument : instrument on which the measurements were made (e.g., "SIO-2G"), default "" labfield : lab field in microtesla for TRM, default 0 phi, theta : direction of lab field [-1,-1 for anisotropy experiments], default 0, 0 peakfield : peak af field in mT for ARM, default 0 specnum : number of terminal characters distinguishing specimen from sample, default 0 samp_con : sample/site naming convention, default '1' "1" XXXXY: where XXXX is an arbitr[ary length site designation and Y is the single character sample designation. e.g., TG001a is the first sample from site TG001. 
[default] "2" XXXX-YY: YY sample from site XXXX (XXX, YY of arbitary length) "3" XXXX.YY: YY sample from site XXXX (XXX, YY of arbitary length) "4-Z" XXXX[YYY]: YYY is sample designation with Z characters from site XXX "5" site name same as sample "6" site is entered under a separate column NOT CURRENTLY SUPPORTED "7-Z" [XXXX]YYY: XXXX is site designation with Z characters with sample name XXXXYYYY NB: all others you will have to customize your self or e-mail ltauxe@ucsd.edu for help. "8" synthetic - has no site name "9" ODP naming convention location : location name for study, default "unknown" lat : latitude of sites, default "" lon : longitude of sites, default "" noave : boolean, if False, average replicates, default False codelist : colon delimited string of lab protocols (e.g., codelist="AF"), default "" AF: af demag T: thermal including thellier but not trm acquisition S: Shaw method I: IRM (acquisition) N: NRM only TRM: trm acquisition ANI: anisotropy experiment D: double AF demag G: triple AF demag (GRM protocol) CR: cooling rate experiment. The treatment coding of the measurement file should be: XXX.00,XXX.10, XXX.20 ...XX.70 etc. (XXX.00 is optional) where XXX in the temperature and .10,.20... are running numbers of the cooling rates steps. XXX.00 is optional zerofield baseline. XXX.70 is alteration check. syntax in sio_magic is: -LP CR xxx,yyy,zzz,..... xxx -A where xxx, yyy, zzz...xxx are cooling time in [K/minutes], seperated by comma, ordered at the same order as XXX.10,XXX.20 ...XX.70 if you use a zerofield step then no need to specify the cooling rate for the zerofield It is important to add to the command line the -A option so the measurements will not be averaged. 
But users need to make sure that there are no duplicate measurements in the file cooling_rates : cooling rate in K/sec for cooling rate dependence studies (K/minutes) in comma separated list for each cooling rate (e.g., "43.6,1.3,43.6") coil : 1,2, or 3 unist of IRM field in volts using ASC coil #1,2 or 3 the fast and slow experiments in comma separated string (e.g., fast: 43.6 K/min, slow: 1.3 K/min) timezone : timezone of date/time string in comment string, default "UTC" user : analyst, default "" Effects _______ creates MagIC formatted tables
convert.sio('sio_af_example.dat',dir_path='data_files/convert_2_magic/sio_magic/',
specnum=1,location='Isla Soccoro',codelist='AF',samp_con='1',
meas_file='af_measurements.txt',spec_file='af_specimens.txt',
samp_file='af_samples.txt',site_file='af_sites.txt')
adding measurement column to measurements table! -I- overwriting /Users/nebula/Python/PmagPy/measurements.txt -I- 14 records written to measurements file -I- overwriting /Users/nebula/Python/PmagPy/data_files/convert_2_magic/sio_magic/af_specimens.txt -I- 1 records written to specimens file -I- overwriting /Users/nebula/Python/PmagPy/data_files/convert_2_magic/sio_magic/af_samples.txt -I- 1 records written to samples file -I- overwriting /Users/nebula/Python/PmagPy/data_files/convert_2_magic/sio_magic/af_sites.txt -I- 1 records written to sites file -I- overwriting /Users/nebula/Python/PmagPy/data_files/convert_2_magic/sio_magic/locations.txt -I- 1 records written to locations file -I- overwriting /Users/nebula/Python/PmagPy/data_files/convert_2_magic/sio_magic/af_measurements.txt -I- 14 records written to measurements file
(True, '/Users/nebula/Python/PmagPy/data_files/convert_2_magic/sio_magic/af_measurements.txt')
convert.sio('sio_thermal_example.dat',dir_path='data_files/convert_2_magic/sio_magic/',
specnum=1,location='Isla Soccoro',codelist='T',
meas_file='thermal_measurements.txt',spec_file='thermal_specimens.txt',
samp_file='thermal_samples.txt',site_file='thermal_sites.txt')
adding measurement column to measurements table! -I- overwriting /Users/nebula/Python/PmagPy/measurements.txt -I- 22 records written to measurements file -I- overwriting /Users/nebula/Python/PmagPy/data_files/convert_2_magic/sio_magic/thermal_specimens.txt -I- 1 records written to specimens file -I- overwriting /Users/nebula/Python/PmagPy/data_files/convert_2_magic/sio_magic/thermal_samples.txt -I- 1 records written to samples file -I- overwriting /Users/nebula/Python/PmagPy/data_files/convert_2_magic/sio_magic/thermal_sites.txt -I- 1 records written to sites file -I- overwriting /Users/nebula/Python/PmagPy/data_files/convert_2_magic/sio_magic/locations.txt -I- 1 records written to locations file -I- overwriting /Users/nebula/Python/PmagPy/data_files/convert_2_magic/sio_magic/thermal_measurements.txt -I- 22 records written to measurements file
(True, '/Users/nebula/Python/PmagPy/data_files/convert_2_magic/sio_magic/thermal_measurements.txt')
And combine them together...
# combine the measurements files
measfiles=['data_files/convert_2_magic/sio_magic/af_measurements.txt',
'data_files/convert_2_magic/sio_magic/thermal_measurements.txt']
ipmag.combine_magic(measfiles,'data_files/convert_2_magic/sio_magic/measurements.txt')
specfiles=['data_files/convert_2_magic/sio_magic/af_specimens.txt',
'data_files/convert_2_magic/sio_magic/thermal_specimens.txt']
ipmag.combine_magic(specfiles,'data_files/convert_2_magic/sio_magic/specimens.txt', magic_table='specimens')
sitefiles=['data_files/convert_2_magic/sio_magic/af_sites.txt',
'data_files/convert_2_magic/sio_magic/thermal_sites.txt']
ipmag.combine_magic(sitefiles,'data_files/convert_2_magic/sio_magic/sites.txt',magic_table='sites')
-I- overwriting /Users/nebula/Python/PmagPy/data_files/convert_2_magic/sio_magic/measurements.txt -I- 36 records written to measurements file -I- overwriting /Users/nebula/Python/PmagPy/data_files/convert_2_magic/sio_magic/specimens.txt -I- 2 records written to specimens file -I- overwriting /Users/nebula/Python/PmagPy/data_files/convert_2_magic/sio_magic/sites.txt -I- 1 records written to sites file
'/Users/nebula/Python/PmagPy/data_files/convert_2_magic/sio_magic/sites.txt'
[MagIC Database] [command line version]
The AGICO program SUFAR creates ASCII text files as output. convert.sufar4() will convert these to the MagIC format.
help(convert.sufar4)
Help on function sufar4 in module pmagpy.convert_2_magic: sufar4(ascfile, meas_output='measurements.txt', aniso_output='rmag_anisotropy.txt', spec_infile=None, spec_outfile='specimens.txt', samp_outfile='samples.txt', site_outfile='sites.txt', specnum=0, sample_naming_con='1', user='', locname='unknown', instrument='', static_15_position_mode=False, dir_path='.', input_dir_path='', data_model_num=3) Converts ascii files generated by SUFAR ver.4.0 to MagIC files Parameters ---------- ascfile : str input ASC file, required meas_output : str measurement output filename, default "measurements.txt" aniso_output : str anisotropy output filename, MagIC 2 only, "rmag_anisotropy.txt" spec_infile : str specimen infile, default None spec_outfile : str specimen outfile, default "specimens.txt" samp_outfile : str sample outfile, default "samples.txt" site_outfile : str site outfile, default "sites.txt" specnum : int number of characters to designate a specimen, default 0 sample_naming_con : str sample/site naming convention, default '1', see info below user : str user name, default "" locname : str location name, default "unknown" instrument : str instrument name, default "" static_15_position_mode : bool specify static 15 position mode - default False (is spinning) dir_path : str output directory, default "." input_dir_path : str input file directory IF different from dir_path, default "" data_model_num : int MagIC data model 2 or 3, default 3 Returns -------- type - Tuple : (True or False indicating if conversion was sucessful, file name written) Info -------- Sample naming convention: [1] XXXXY: where XXXX is an arbitrary length site designation and Y is the single character sample designation. e.g., TG001a is the first sample from site TG001. 
[default] [2] XXXX-YY: YY sample from site XXXX (XXX, YY of arbitary length) [3] XXXX.YY: YY sample from site XXXX (XXX, YY of arbitary length) [4-Z] XXXX[YYY]: YYY is sample designation with Z characters from site XXX [5] site name = sample name [6] site name entered in site_name column in the orient.txt format input file -- NOT CURRENTLY SUPPORTED [7-Z] [XXX]YYY: XXX is site designation with Z characters from samples XXXYYY
convert.sufar4('sufar4-asc_magic_example.txt',dir_path='data_files/convert_2_magic/sufar_asc_magic/',
sample_naming_con='5',locname='U1356A')
290 records written to file /Users/nebula/Python/PmagPy/data_files/convert_2_magic/sufar_asc_magic/measurements.txt bulk measurements put in /Users/nebula/Python/PmagPy/data_files/convert_2_magic/sufar_asc_magic/measurements.txt 728 records written to file /Users/nebula/Python/PmagPy/data_files/convert_2_magic/sufar_asc_magic/specimens.txt specimen/anisotropy info put in /Users/nebula/Python/PmagPy/data_files/convert_2_magic/sufar_asc_magic/specimens.txt 148 records written to file /Users/nebula/Python/PmagPy/data_files/convert_2_magic/sufar_asc_magic/samples.txt sample info put in /Users/nebula/Python/PmagPy/data_files/convert_2_magic/sufar_asc_magic/samples.txt 148 records written to file /Users/nebula/Python/PmagPy/data_files/convert_2_magic/sufar_asc_magic/sites.txt site info put in /Users/nebula/Python/PmagPy/data_files/convert_2_magic/sufar_asc_magic/sites.txt
(True, '/Users/nebula/Python/PmagPy/data_files/convert_2_magic/sufar_asc_magic/measurements.txt')
Now we can test it out with, for example, ipmag.aniso_magic_nb()
ipmag.aniso_magic_nb(infile='data_files/convert_2_magic/sufar_asc_magic/specimens.txt')
1 saved in U1356A_s_aniso-data.png 2 saved in U1356A_s_aniso-conf.png
(True, ['U1356A_s_aniso-data.png', 'U1356A_s_aniso-conf.png'])
[MagIC Database] [command line version]
Conversion of the ThellierTool format of Leonhardt et al. (2004) can be done with convert.tdt(). NOTE: there is a known problem with the XXX.4 treatment step conversion.
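In .tdt files the treatment column is coded TTT.S, where TTT is the temperature in centigrade and the trailing digit flags the step type. The mapping sketched below follows the usual ThellierTool convention; treat it as an assumption and verify it against your own files. The .4 (additivity check) step is the one flagged above as problematic.

```python
# Hypothetical helper; the step-type mapping follows the usual
# ThellierTool convention and should be verified against your files.
STEP_TYPES = {0: 'zero-field (demagnetization)',
              1: 'in-field (pTRM acquisition)',
              2: 'pTRM check',
              3: 'pTRM tail check',
              4: 'additivity check'}   # the XXX.4 step noted above

def decode_tdt_treatment(code):
    """Split a TTT.S treatment code into temperature (C) and step type."""
    temperature = int(code)                       # integer part
    step = int(round((code - temperature) * 10))  # first decimal digit
    return temperature, STEP_TYPES[step]

print(decode_tdt_treatment(350.2))
```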
help(convert.tdt)
Help on function tdt in module pmagpy.convert_2_magic: tdt(input_dir_path, experiment_name='Thellier', meas_file_name='measurements.txt', spec_file_name='specimens.txt', samp_file_name='samples.txt', site_file_name='sites.txt', loc_file_name='locations.txt', user='', location='', lab_dec=0, lab_inc=90, moment_units='mA/m', samp_name_con='sample=specimen', samp_name_chars=0, site_name_con='site=sample', site_name_chars=0, volume=12.0, output_dir_path='') converts TDT formatted files to measurements format files Parameters ---------- input_dir_path : str directory with one or more .tdt files experiment: str one of: ["Thellier", "ATRM 6 pos", "NLT"], default "Thellier" meas_file_name : str default "measurements.txt" spec_file_name : str default "specimens.txt" samp_file_name : str default "samples.txt" site_file_name : str default "sites.txt" loc_file_name : str default "locations.txt" user : str default "" location : str default "" lab_dec: int default: 0 lab_inc: int default 90 moment_units : str must be one of: ["mA/m", "emu", "Am^2"], default "mA/m" samp_name_con : str or int {1: "sample=specimen", 2: "no. of terminate characters", 3: "character delimited"} samp_name_chars : str or int number of characters to remove for sample name, (or delimiting character), default 0 site_name_con : str or int {1: "site=sample", 2: "no. of terminate characters", 3: "character delimited"} site_name_chars : str or int number of characters to remove for site name, (or delimiting character), default 0 volume : float volume in cc, default 12 output_dir_path : str path for file output, defaults to input_dir_path Returns --------- tuple : (True if program ran else False, measurement outfile name or error message if failed)
convert.tdt('data_files/convert_2_magic/tdt_magic/')
Open file: data_files/convert_2_magic/tdt_magic/Krasa_MGH1.tdt Open file: data_files/convert_2_magic/tdt_magic/Krasa_MGH1_noAC.tdt adding measurement column to measurements table! -I- overwriting /Users/nebula/Python/PmagPy/measurements.txt -I- 40 records written to measurements file -I- writing specimens records to /Users/nebula/Python/PmagPy/data_files/convert_2_magic/tdt_magic/specimens.txt -I- 1 records written to specimens file -I- writing samples records to /Users/nebula/Python/PmagPy/data_files/convert_2_magic/tdt_magic/samples.txt -I- 1 records written to samples file -I- writing sites records to /Users/nebula/Python/PmagPy/data_files/convert_2_magic/tdt_magic/sites.txt -I- 1 records written to sites file -I- writing locations records to /Users/nebula/Python/PmagPy/data_files/convert_2_magic/tdt_magic/locations.txt -I- 1 records written to locations file -I- writing measurements records to /Users/nebula/Python/PmagPy/data_files/convert_2_magic/tdt_magic/measurements.txt -I- 40 records written to measurements file
(True, '/Users/nebula/Python/PmagPy/data_files/convert_2_magic/tdt_magic/measurements.txt')
help(convert.utrecht)
Help on function utrecht in module pmagpy.convert_2_magic: utrecht(mag_file, dir_path='.', input_dir_path='', meas_file='measurements.txt', spec_file='specimens.txt', samp_file='samples.txt', site_file='sites.txt', loc_file='locations.txt', location='unknown', lat='', lon='', dmy_flag=False, noave=False, meas_n_orient=8, meth_code='LP-NO', specnum=1, samp_con='2', labfield=0, phi=0, theta=0) Converts Utrecht magnetometer data files to MagIC files Parameters ---------- mag_file : str input file name dir_path : str working directory, default "." input_dir_path : str input file directory IF different from dir_path, default "" spec_file : str output specimen file name, default "specimens.txt" samp_file: str output sample file name, default "samples.txt" site_file : str output site file name, default "sites.txt" loc_file : str output location file name, default "locations.txt" append : bool append output files to existing files instead of overwrite, default False location : str location name, default "unknown" lat : float latitude, default "" lon : float longitude, default "" dmy_flag : bool default False noave : bool do not average duplicate measurements, default False (so by default, DO average) meas_n_orient : int Number of different orientations in measurement (default : 8) meth_code : str sample method codes, default "LP-NO" e.g. [SO-MAG, SO-SUN, SO-SIGHT, ...] specnum : int number of characters to designate a specimen, default 0 samp_con : str sample/site naming convention, default '2', see info below labfield : float DC_FIELD in microTesla (default : 0) phi : float DC_PHI in degrees (default : 0) theta : float DC_THETA in degrees (default : 0) Returns ---------- type - Tuple : (True or False indicating if conversion was sucessful, meas_file name written) Info --------- Sample naming convention: [1] XXXXY: where XXXX is an arbitrary length site designation and Y is the single character sample designation. e.g., TG001a is the first sample from site TG001. 
[default] [2] XXXX-YY: YY sample from site XXXX (XXX, YY of arbitary length) [3] XXXX.YY: YY sample from site XXXX (XXX, YY of arbitary length) [4-Z] XXXX[YYY]: YYY is sample designation with Z characters from site XXX [5] site name = sample name [6] site name entered in site_name column in the orient.txt format input file -- NOT CURRENTLY SUPPORTED [7-Z] [XXX]YYY: XXX is site designation with Z characters from samples XXXYYY
convert.utrecht('Utrecht_Example.af',dir_path='data_files/convert_2_magic/utrecht_magic',
specnum=0,samp_con='3')
adding measurement column to measurements table! -I- overwriting /Users/nebula/Python/PmagPy/measurements.txt -I- 350 records written to measurements file -I- overwriting /Users/nebula/Python/PmagPy/data_files/convert_2_magic/utrecht_magic/specimens.txt -I- 25 records written to specimens file -I- overwriting /Users/nebula/Python/PmagPy/data_files/convert_2_magic/utrecht_magic/samples.txt -I- 25 records written to samples file -I- overwriting /Users/nebula/Python/PmagPy/data_files/convert_2_magic/utrecht_magic/sites.txt -I- 1 records written to sites file -I- overwriting /Users/nebula/Python/PmagPy/data_files/convert_2_magic/utrecht_magic/locations.txt -I- 1 records written to locations file -I- overwriting /Users/nebula/Python/PmagPy/data_files/convert_2_magic/utrecht_magic/measurements.txt -I- 350 records written to measurements file
(True, '/Users/nebula/Python/PmagPy/data_files/convert_2_magic/utrecht_magic/measurements.txt')
[Preparing for MagIC] [command line version]
orientation_magic imports field notebook data, entered in a format like that of orient_example.txt, into MagIC format samples, sites, and locations tables.
See the PmagPy cookbook for details about the orient.txt file format. The example file used here has field information for a few sites. The samples were oriented with a Pomeroy orientation device (the default), and it is desirable to calculate the magnetic declination from the IGRF at the time of sampling (also the default). Sample names follow the rule that the sample is designated by a letter at the end of the site name (convention #1, which is also the default). We can do this from within a notebook by calling ipmag.orientation_magic().
help(ipmag.orientation_magic)
Help on function orientation_magic in module pmagpy.ipmag: orientation_magic(or_con=1, dec_correction_con=1, dec_correction=0, bed_correction=True, samp_con='1', hours_from_gmt=0, method_codes='', average_bedding=False, orient_file='orient.txt', samp_file='samples.txt', site_file='sites.txt', output_dir_path='.', input_dir_path='', append=False, data_model=3) use this function to convert tab delimited field notebook information to MagIC formatted tables (er_samples and er_sites) INPUT FORMAT Input files must be tab delimited and have in the first line: tab location_name Note: The "location_name" will facilitate searching in the MagIC database. Data from different "locations" should be put in separate files. The definition of a "location" is rather loose. Also this is the word 'tab' not a tab, which will be indicated by ' '. The second line has the names of the columns (tab delimited), e.g.: site_name sample_name mag_azimuth field_dip date lat long sample_lithology sample_type sample_class shadow_angle hhmm stratigraphic_height bedding_dip_direction bedding_dip GPS_baseline image_name image_look image_photographer participants method_codes site_description sample_description GPS_Az, sample_igsn, sample_texture, sample_cooling_rate, cooling_rate_corr, cooling_rate_mcd Notes: 1) column order doesn't matter but the NAMES do. 2) sample_name, sample_lithology, sample_type, sample_class, lat and long are required. all others are optional. 3) If subsequent data are the same (e.g., date, bedding orientation, participants, stratigraphic_height), you can leave the field blank and the program will fill in the last recorded information. BUT if you really want a blank stratigraphic_height, enter a '-1'. These will not be inherited and must be specified for each entry: image_name, look, photographer or method_codes 4) hhmm must be in the format: hh:mm and the hh must be in 24 hour time. date must be mm/dd/yy (years < 50 will be converted to 20yy and >50 will be assumed 19yy). 
hours_from_gmt is the number of hours to SUBTRACT from hh to get to GMT. 5) image_name, image_look and image_photographer are colon delimited lists of file name (e.g., IMG_001.jpg) image look direction and the name of the photographer respectively. If all images had same look and photographer, just enter info once. The images will be assigned to the site for which they were taken - not at the sample level. 6) participants: Names of who helped take the samples. These must be a colon delimited list. 7) method_codes: Special method codes on a sample level, e.g., SO-GT5 which means the orientation is has an uncertainty of >5 degrees for example if it broke off before orienting.... 8) GPS_Az is the place to put directly determined GPS Azimuths, using, e.g., points along the drill direction. 9) sample_cooling_rate is the cooling rate in K per Ma 10) int_corr_cooling_rate 11) cooling_rate_mcd: data adjustment method code for cooling rate correction; DA-CR-EG is educated guess; DA-CR-PS is percent estimated from pilot samples; DA-CR-TRM is comparison between 2 TRMs acquired with slow and rapid cooling rates. is the percent cooling rate factor to apply to specimens from this sample, DA-CR-XX is the method code defaults: orientation_magic(or_con=1, dec_correction_con=1, dec_correction=0, bed_correction=True, samp_con='1', hours_from_gmt=0, method_codes='', average_bedding=False, orient_file='orient.txt', samp_file='er_samples.txt', site_file='er_sites.txt', output_dir_path='.', input_dir_path='', append=False): orientation conventions: [1] Standard Pomeroy convention of azimuth and hade (degrees from vertical down) of the drill direction (field arrow). lab arrow azimuth= sample_azimuth = mag_azimuth; lab arrow dip = sample_dip =-field_dip. i.e. the lab arrow dip is minus the hade. [2] Field arrow is the strike of the plane orthogonal to the drill direction, Field dip is the hade of the drill direction. 
Lab arrow azimuth = mag_azimuth-90 Lab arrow dip = -field_dip [3] Lab arrow is the same as the drill direction; hade was measured in the field. Lab arrow azimuth = mag_azimuth; Lab arrow dip = 90-field_dip [4] lab azimuth and dip are same as mag_azimuth, field_dip : use this for unoriented samples too [5] Same as AZDIP convention explained below - azimuth and inclination of the drill direction are mag_azimuth and field_dip; lab arrow is as in [1] above. lab azimuth is same as mag_azimuth,lab arrow dip=field_dip-90 [6] Lab arrow azimuth = mag_azimuth-90; Lab arrow dip = 90-field_dip [7] see http://earthref.org/PmagPy/cookbook/#field_info for more information. You can customize other format yourself, or email ltauxe@ucsd.edu for help. Magnetic declination convention: [1] Use the IGRF value at the lat/long and date supplied [default] [2] Will supply declination correction [3] mag_az is already corrected in file [4] Correct mag_az but not bedding_dip_dir Sample naming convention: [1] XXXXY: where XXXX is an arbitrary length site designation and Y is the single character sample designation. e.g., TG001a is the first sample from site TG001. [default] [2] XXXX-YY: YY sample from site XXXX (XXX, YY of arbitary length) [3] XXXX.YY: YY sample from site XXXX (XXX, YY of arbitary length) [4-Z] XXXX[YYY]: YYY is sample designation with Z characters from site XXX [5] site name = sample name [6] site name entered in site_name column in the orient.txt format input file -- NOT CURRENTLY SUPPORTED [7-Z] [XXX]YYY: XXX is site designation with Z characters from samples XXXYYY NB: all others you will have to either customize your self or e-mail ltauxe@ucsd.edu for help.
We need to know which orientation convention was used to take the samples (here a Pomeroy device, so the default). We want to use the IGRF-calculated magnetic declination at each site (so dec_correction_con=1, the default). These samples were collected in Antarctica with a local time of GMT+13, so we need to subtract 13 hours to get to GMT: hours_from_gmt should be 13. We are using data model 3.0 for this notebook, so data_model=3. Also, input_dir_path and output_dir_path are both data_files/orientation_magic.
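For convention #1, the conversion quoted in the help text (lab arrow azimuth = mag_azimuth; lab arrow dip = -field_dip, i.e. minus the hade) reduces to two assignments. A minimal, hypothetical helper:

```python
def pomeroy_to_lab_arrow(mag_azimuth, field_dip, dec_correction=0.0):
    """Convention #1 (standard Pomeroy): the field arrow records the
    azimuth and hade (degrees from vertical down) of the drill direction.
    Lab arrow azimuth = mag_azimuth (plus any declination correction);
    lab arrow dip = -field_dip, i.e. minus the hade."""
    azimuth = (mag_azimuth + dec_correction) % 360.0
    dip = -field_dip
    return azimuth, dip

print(pomeroy_to_lab_arrow(40.0, 52.0, dec_correction=10.5))
```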
ipmag.orientation_magic(input_dir_path='data_files/orientation_magic',orient_file='orient_example.txt',
hours_from_gmt=13,data_model=3,output_dir_path='data_files/orientation_magic')
setting location name to "" setting location name to "" saving data... 24 records written to file /Users/nebula/Python/PmagPy/data_files/orientation_magic/samples.txt 2 records written to file /Users/nebula/Python/PmagPy/data_files/orientation_magic/sites.txt Data saved in /Users/nebula/Python/PmagPy/data_files/orientation_magic/samples.txt and /Users/nebula/Python/PmagPy/data_files/orientation_magic/sites.txt
(True, None)
[MagIC Database] [command line version]
Many paleomagnetists save orientation information in files of this format: Sample Azimuth Plunge Strike Dip (the AZDIP format), where the azimuth and plunge are the declination and inclination of the drill direction and the strike and dip are the attitude of the sampled unit (dip to the right of strike). There are many ways to think about sample orientation; the MagIC database convention is to store the direction of the X coordinate of the specimen measurements. To convert an AzDip formatted file (example in data_files/azdip_magic/azdip_magic_example.dat), we can use ipmag.azdip_magic().
help(ipmag.azdip_magic)
Help on function azdip_magic in module pmagpy.ipmag:

azdip_magic(orient_file='orient.txt', samp_file='samples.txt', samp_con='1', Z=1, method_codes='FS-FD', location_name='unknown', append=False, output_dir='.', input_dir='.', data_model=3)
    takes space delimited AzDip file and converts to MagIC formatted tables

    Parameters
    __________
    orient_file : name of azdip formatted input file
    samp_file : name of samples.txt formatted output file
    samp_con : integer of sample orientation convention
        [1] XXXXY: where XXXX is an arbitrary length site designation and Y is the single character sample designation.  e.g., TG001a is the first sample from site TG001. [default]
        [2] XXXX-YY: YY sample from site XXXX (XXXX, YY of arbitrary length)
        [3] XXXX.YY: YY sample from site XXXX (XXXX, YY of arbitrary length)
        [4-Z] XXXX[YYY]: YYY is sample designation with Z characters from site XXX
        [5] site name same as sample
        [6] site name entered in site_name column in the orient.txt format input file -- NOT CURRENTLY SUPPORTED
        [7-Z] [XXXX]YYY: XXXX is site designation with Z characters with sample name XXXXYYYY
    method_codes : colon delimited string with the following as desired
        FS-FD field sampling done with a drill
        FS-H field sampling done with hand samples
        FS-LOC-GPS field location done with GPS
        FS-LOC-MAP field location done with map
        SO-POM a Pomeroy orientation device was used
        SO-ASC an ASC orientation device was used
        SO-MAG orientation with magnetic compass
    location_name : location of samples
    append : boolean. if True, append to the output file
    output_dir : path to output file directory
    input_dir : path to input file directory
    data_model : MagIC data model.

    INPUT FORMAT
        Input files must be space delimited:
            Samp Az Dip Strike Dip
        Orientation convention:
            Lab arrow azimuth = mag_azimuth; Lab arrow dip = 90-field_dip
            e.g. field_dip is degrees from horizontal of drill direction
        Magnetic declination convention:
            Az is already corrected in file
The method_codes are important. If you don't specify any sample orientation method, for example, the program will assume that they are unoriented. Pick the appropriate method codes for field sampling (FS-) and sample orientation (SO-) from the lists here: https://www2.earthref.org/MagIC/method-codes
ipmag.azdip_magic(orient_file='azdip_magic_example.dat',input_dir='data_files/azdip_magic/',
output_dir='data_files/azdip_magic/', method_codes='FS-FD:SO-MAG')
916 records written to file /Users/nebula/Python/PmagPy/data_files/azdip_magic/samples.txt
Data saved in /Users/nebula/Python/PmagPy/data_files/azdip_magic/samples.txt
(True, None)
Anisotropy of anhysteretic or other remanence can be converted to a tensor and used to correct natural remanence data for the effects of anisotropic remanence acquisition. For example, directions may be deflected from the geomagnetic field direction, or intensities may be biased by strong anisotropies in the magnetic fabric of the specimen. By imparting an anhysteretic or thermal remanence in many specific orientations, the anisotropy of remanence acquisition can be characterized and used for correction. We do this for anisotropy of anhysteretic remanence (AARM) by imparting an ARM in 9, 12 or 15 positions. Each ARM must be preceded by an AF demagnetization step. The 15 positions are shown in the k15_magic example.
For the 9 position scheme, aarm_magic assumes that the AARMs are imparted in positions 1,2,3, 6,7,8, 11,12,13. Someone (a.k.a. Josh Feinberg) has kindly made the measurements and saved them in an SIO-formatted measurement file named arm_magic_example.dat in the data_files directory called aarm_magic. Note the special format of these files - the treatment column (column #2) has the position number (1,2,3,6, etc.) followed by either a “00” for the obligatory zero field baseline step or a “10” for the in-field step. These could also be ‘0’ and ‘1’.
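A minimal sketch (a hypothetical helper, not part of PmagPy) of how such a treatment code could be decoded, assuming the final two characters carry the baseline/in-field flag and the leading characters carry the position number:

```python
def parse_aarm_treatment(treat):
    """Split an SIO AARM treatment code into (position, in_field).

    Assumes the convention described above: e.g. '100' is the zero-field
    baseline for position 1, '1110' is the in-field step for position 11.
    """
    position = int(treat[:-2])      # leading characters: position number
    in_field = treat[-2:] == '10'   # '10' = in-field ARM step, '00' = baseline
    return position, in_field

print(parse_aarm_treatment('100'))   # (1, False)
print(parse_aarm_treatment('1110'))  # (11, True)
```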
We need to first import these into the measurements format and then calculate the anisotropy tensors. These can then be plotted or used to correct paleointensity or directional data for anisotropy of remanence.
So, first follow the instructions in sio_magic to import the AARM data into the MagIC format. The DC field was 50 μT, the peak AC field was 180 mT, the location was "Bushveld" and the lab protocol was AF and Anisotropy. The naming convention used Option # 3 (see help menu).
Then we need to calculate the best-fit tensor and write them out to the specimens.txt MagIC tables which can be used to correct remanence data for anisotropy.
The aarm_magic program takes a measurements.txt formatted file with anisotropy of ARM data in it, calculates the tensors, rotates them into the desired coordinate system, and stores the data in a specimens.txt format file. To do this in a notebook, use ipmag.aarm_magic().
convert.sio('arm_magic_example.dat',dir_path='data_files/aarm_magic/',specnum=3,
location='Bushveld',codelist='AF:ANI',samp_con='3',
meas_file='aarm_measurements.txt',peakfield=180,labfield=50, phi=-1, theta=-1)
adding measurement column to measurements table!
-I- overwriting /Users/nebula/Python/PmagPy/measurements.txt
-I- 126 records written to measurements file
-I- overwriting /Users/nebula/Python/PmagPy/data_files/aarm_magic/specimens.txt
-I- 7 records written to specimens file
-I- overwriting /Users/nebula/Python/PmagPy/data_files/aarm_magic/samples.txt
-I- 1 records written to samples file
-I- overwriting /Users/nebula/Python/PmagPy/data_files/aarm_magic/sites.txt
-I- 1 records written to sites file
-I- overwriting /Users/nebula/Python/PmagPy/data_files/aarm_magic/locations.txt
-I- 1 records written to locations file
-I- overwriting /Users/nebula/Python/PmagPy/data_files/aarm_magic/aarm_measurements.txt
-I- 126 records written to measurements file
(True, '/Users/nebula/Python/PmagPy/data_files/aarm_magic/aarm_measurements.txt')
help(ipmag.aarm_magic)
Help on function aarm_magic in module pmagpy.ipmag:

aarm_magic(infile, dir_path='.', input_dir_path='', spec_file='specimens.txt', samp_file='samples.txt', data_model_num=3, coord='s')
    Converts AARM data to best-fit tensor (6 elements plus sigma)

    Parameters
    ----------
    infile : str
        input measurement file
    dir_path : str
        output directory, default "."
    input_dir_path : str
        input file directory IF different from dir_path, default ""
    spec_file : str
        input/output specimen file name, default "specimens.txt"
    samp_file : str
        input sample file name, default "samples.txt"
    data_model_num : number
        MagIC data model [2, 3], default 3
    coord : str
        coordinate system specimen/geographic/tilt-corrected,
        ['s', 'g', 't'], default 's'

    Returns
    ---------
    Tuple : (True or False indicating if conversion was successful, output file name written)

    Info
    ---------
    Input is a series of baseline, ARM pairs.
    The baseline should be the AF demagnetized state (3 axis demag is preferable) for the following ARM acquisition.
    The order of the measurements is:
        positions 1,2,3, 6,7,8, 11,12,13 (for 9 positions)
        positions 1,2,3,4, 6,7,8,9, 11,12,13,14 (for 12 positions)
        positions 1-15 (for 15 positions)
ipmag.aarm_magic('aarm_measurements.txt',dir_path='data_files/aarm_magic/')
7 records written to file /Users/nebula/Python/PmagPy/data_files/aarm_magic/specimens.txt
specimen data stored in /Users/nebula/Python/PmagPy/data_files/aarm_magic/specimens.txt
(True, '/Users/nebula/Python/PmagPy/data_files/aarm_magic/specimens.txt')
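Since MagIC tables are tab delimited with a one-line header above the column names, the resulting specimens table can be inspected with pandas. The snippet below parses an illustrative two-line table (made-up values, not real output from the run above):

```python
import io
import pandas as pd

# An illustrative MagIC-style specimens table: first line names the table,
# second line holds the column names, then the data rows follow.
magic_table = (
    "tab\tspecimens\n"
    "specimen\taniso_s\taniso_type\n"
    "spc01\t0.334:0.333:0.333:0.001:0.000:0.001\tAARM\n"
)
# header=1 skips the "tab specimens" line and uses the next line as columns
spec_df = pd.read_csv(io.StringIO(magic_table), sep='\t', header=1)
print(spec_df.loc[0, 'aniso_s'])  # colon-delimited six-element tensor
```

The same pattern with pd.read_csv('data_files/aarm_magic/specimens.txt', sep='\t', header=1) reads the file written above.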
ipmag.aniso_magic_nb(infile='data_files/aarm_magic/specimens.txt')
1 saved in _s_aniso-data.png
2 saved in _s_aniso-conf.png
(True, ['_s_aniso-data.png', '_s_aniso-conf.png'])
help(ipmag.aniso_magic_nb)
Help on function aniso_magic_nb in module pmagpy.ipmag:

aniso_magic_nb(infile='specimens.txt', samp_file='samples.txt', site_file='sites.txt', verbose=True, ipar=False, ihext=True, ivec=False, isite=False, iloc=False, iboot=False, vec=0, Dir=[], PDir=[], crd='s', num_bootstraps=1000, dir_path='.', fignum=1, save_plots=True, interactive=False, fmt='png')
    Makes plots of anisotropy eigenvectors, eigenvalues and confidence bounds
    All directions are on the lower hemisphere.

    Parameters
    __________
    infile : specimens formatted file with aniso_s data
    samp_file : samples formatted file with sample => site relationship
    site_file : sites formatted file with site => location relationship
    verbose : if True, print messages to output
    confidence bounds options:
        ipar : if True - perform parametric bootstrap - requires non-blank aniso_s_sigma
        ihext : if True - Hext ellipses
        ivec : if True - plot bootstrapped eigenvectors instead of ellipses
        isite : if True plot by site, requires non-blank samp_file
        #iloc : if True plot by location, requires non-blank samp_file, and site_file NOT IMPLEMENTED
        iboot : if True - bootstrap ellipses
    vec : eigenvector for comparison with Dir
    Dir : [Dec,Inc] list for comparison direction
    PDir : [Pole_dec, Pole_Inc] for pole to plane for comparison
        green dots are on the lower hemisphere, cyan are on the upper hemisphere
    crd : ['s','g','t'], coordinate system for plotting whereby:
        s : specimen coordinates, aniso_tilt_correction = -1, or unspecified
        g : geographic coordinates, aniso_tilt_correction = 0
        t : tilt corrected coordinates, aniso_tilt_correction = 100
    num_bootstraps : how many bootstraps to do, default 1000
    dir_path : directory path
    fignum : matplotlib figure number, default 1
    save_plots : bool, default True
        if True, create and save all requested plots
    interactive : bool, default False
        interactively plot and display for each specimen (this is best used on the command line only)
    fmt : str, default "svg"
        format for figures, [svg, jpg, pdf, png]
[Essentials Appendix A.3.4] [command line version]
angle calculates the angle $\alpha$ between two declination,inclination pairs. It reads in the directions from the command line or from a file and calls pmag.angle() to do the calculation.
There are several ways to do this from within the notebook: one is to load the data into a Pandas DataFrame and then convert to the desired arrays; another is to load the data directly into a NumPy array of the desired shape.
help(pmag.angle)
Help on function angle in module pmagpy.pmag:

angle(D1, D2)
    Calculate the angle between two directions.

    Parameters
    ----------
    D1 : Direction 1 as an array of [declination, inclination] pair or pairs
    D2 : Direction 2 as an array of [declination, inclination] pair or pairs

    Returns
    -------
    angle : angle between the directions as a single-element array

    Examples
    --------
    >>> pmag.angle([350.0,10.0],[320.0,20.0])
    array([ 30.59060998])
# Pandas way:
di=pd.read_csv('data_files/angle/angle.dat',delim_whitespace=True,header=None)
#rename column headers
di.columns=['Dec1','Inc1','Dec2','Inc2']
Here's the sort of data in the file:
di.head()
|   | Dec1 | Inc1 | Dec2 | Inc2 |
|---|------|------|------|------|
| 0 | 11.2 | 32.9 | 6.4 | -42.9 |
| 1 | 11.5 | 63.7 | 10.5 | -55.4 |
| 2 | 11.9 | 31.4 | 358.1 | -71.8 |
| 3 | 349.6 | 36.2 | 356.3 | -45.0 |
| 4 | 60.3 | 63.5 | 58.9 | -56.6 |
Now we will use pmag.angle() to calculate the angles.
# call pmag.angle
pmag.angle(di[['Dec1','Inc1']].values,di[['Dec2','Inc2']].values)
array([ 75.92745193, 119.10251273, 103.65330599, 81.42586582, 120.1048559 , 100.8579262 , 95.07347774, 74.10981614, 78.41266977, 120.05285684, 114.36156914, 66.30664335, 85.38356936, 95.07546203, 93.84174 , 93.116631 , 105.39087299, 71.78167883, 104.04746653, 93.84450445, 93.29827337, 96.34377954, 90.14271929, 112.17559328, 90.06592091, 120.00493016, 75.31604123, 86.19902246, 85.85667799, 82.64834934, 115.51261896, 99.28623007, 65.9466766 , 90.55185269, 90.50418859, 84.49253198, 93.00731365, 67.47153733, 76.84279617, 83.80354 , 128.3068145 , 91.690954 , 46.87441241, 110.66917836, 103.69699188, 64.35444341, 81.94448359, 94.01817998, 121.19588845, 83.64445512, 113.72812352, 76.38276774, 113.38742874, 74.09024232, 79.42493098, 74.92842387, 90.5556631 , 91.44844861, 112.71773111, 77.26775912, 77.06338144, 62.41361128, 88.42053203, 106.29965884, 100.55759278, 143.79308212, 104.94537375, 91.83604987, 96.21780532, 85.58941479, 65.61977586, 88.64226464, 75.64540868, 93.36044834, 101.25961804, 115.14897178, 86.70974597, 92.32998728, 91.89347431, 102.39692204, 78.93051946, 93.41996659, 88.08998457, 94.50358255, 76.96036419, 110.40068516, 89.23179785, 80.90505187, 100.40590063, 91.88885371, 107.05953781, 115.8185023 , 111.2919312 , 124.61718069, 88.12341445, 66.94129884, 99.90439898, 76.73639992, 71.37398958, 100.7789606 ])
Here is the other (equally valid) way using np.loadtxt().
# Numpy way:
di=np.loadtxt('data_files/angle/angle.dat').transpose() # read in file
D1=di[0:2].transpose() # assign to first array
D2=di[2:].transpose() # assign to second array
pmag.angle(D1,D2) # call pmag.angle
array([ 75.92745193, 119.10251273, 103.65330599, 81.42586582, 120.1048559 , 100.8579262 , 95.07347774, 74.10981614, 78.41266977, 120.05285684, 114.36156914, 66.30664335, 85.38356936, 95.07546203, 93.84174 , 93.116631 , 105.39087299, 71.78167883, 104.04746653, 93.84450445, 93.29827337, 96.34377954, 90.14271929, 112.17559328, 90.06592091, 120.00493016, 75.31604123, 86.19902246, 85.85667799, 82.64834934, 115.51261896, 99.28623007, 65.9466766 , 90.55185269, 90.50418859, 84.49253198, 93.00731365, 67.47153733, 76.84279617, 83.80354 , 128.3068145 , 91.690954 , 46.87441241, 110.66917836, 103.69699188, 64.35444341, 81.94448359, 94.01817998, 121.19588845, 83.64445512, 113.72812352, 76.38276774, 113.38742874, 74.09024232, 79.42493098, 74.92842387, 90.5556631 , 91.44844861, 112.71773111, 77.26775912, 77.06338144, 62.41361128, 88.42053203, 106.29965884, 100.55759278, 143.79308212, 104.94537375, 91.83604987, 96.21780532, 85.58941479, 65.61977586, 88.64226464, 75.64540868, 93.36044834, 101.25961804, 115.14897178, 86.70974597, 92.32998728, 91.89347431, 102.39692204, 78.93051946, 93.41996659, 88.08998457, 94.50358255, 76.96036419, 110.40068516, 89.23179785, 80.90505187, 100.40590063, 91.88885371, 107.05953781, 115.8185023 , 111.2919312 , 124.61718069, 88.12341445, 66.94129884, 99.90439898, 76.73639992, 71.37398958, 100.7789606 ])
You can always save your output using np.savetxt().
angles=pmag.angle(D1,D2) # assign the returned array to angles
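For example (writing to a hypothetical file name angles.txt, with made-up values standing in for the output of pmag.angle()):

```python
import numpy as np

angles = np.array([75.93, 119.10, 103.65])      # stand-in for pmag.angle output
np.savetxt('angles.txt', angles, fmt='%10.4f')  # one angle per line
print(np.loadtxt('angles.txt'))                 # round-trips the saved values
```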
[Essentials Chapter 13] [MagIC Database] [command line version]
Anisotropy data can be plotted versus depth. The program ani_depthplot uses MagIC formatted data tables. Bulk susceptibility measurements can also be plotted if they are available in a measurements.txt formatted file.
In this example, we will use the data from Tauxe et al. (2015, doi:10.1016/j.epsl.2014.12.034) measured on samples obtained during Expedition 318 of the Integrated Ocean Drilling Program. To get the entire dataset, go to the MagIC database at: https://www2.earthref.org/MagIC/doi/10.1016/j.epsl.2014.12.034. Download the data set and unpack it with ipmag.download_magic.
We will use the ipmag.ani_depthplot() version of this program.
help(ipmag.ani_depthplot)
Help on function ani_depthplot in module pmagpy.ipmag:

ani_depthplot(spec_file='specimens.txt', samp_file='samples.txt', meas_file='measurements.txt', site_file='sites.txt', age_file='', sum_file='', fmt='svg', dmin=-1, dmax=-1, depth_scale='core_depth', dir_path='.', contribution=None)
    returns matplotlib figure with anisotropy data plotted against depth
    available depth scales: 'composite_depth', 'core_depth' or 'age' (you must provide an age file to use this option).
    You must provide valid specimens and sites files, and either a samples or an ages file.
    You may additionally provide measurements and a summary file (csv).

    Parameters
    ----------
    spec_file : str, default "specimens.txt"
    samp_file : str, default "samples.txt"
    meas_file : str, default "measurements.txt"
    site_file : str, default "sites.txt"
    age_file : str, default ""
    sum_file : str, default ""
    fmt : str, default "svg"
        format for figures, ["svg", "jpg", "pdf", "png"]
    dmin : number, default -1
        minimum depth to plot (if -1, default to plotting all)
    dmax : number, default -1
        maximum depth to plot (if -1, default to plotting all)
    depth_scale : str, default "core_depth"
        scale to plot, ['composite_depth', 'core_depth', 'age'].
        if 'age' is selected, you must provide an ages file.
    dir_path : str, default "."
        directory for input files
    contribution : cb.Contribution, default None
        if provided, use Contribution object instead of reading in data from files

    Returns
    ---------
    plot : matplotlib plot, or False if no plot could be created
    name : figure name, or error message if no plot could be created
And here we go:
ipmag.ani_depthplot(dir_path='data_files/ani_depthplot');
[Essentials Chapter 13] [MagIC Database] [command line version]
Samples were collected from the eastern margin of a dike oriented with a bedding pole declination of 110° and dip of 2°. The data have been imported into a MagIC (data model 3) formatted file named dike_specimens.txt.
We will make a plot of the data using ipmag.aniso_magic_nb(), using the site parametric bootstrap option and plot out the bootstrapped eigenvectors. We will also draw on the trace of the dike.
help(ipmag.aniso_magic)
Help on function aniso_magic in module pmagpy.ipmag:

aniso_magic(infile='specimens.txt', samp_file='samples.txt', site_file='sites.txt', ipar=1, ihext=1, ivec=1, iplot=0, isite=1, iboot=1, vec=0, Dir=[], PDir=[], comp=0, user='', fmt='png', crd='s', verbose=True, plots=0, num_bootstraps=1000, dir_path='.', input_dir_path='')
help(ipmag.aniso_magic_nb)
Help on function aniso_magic_nb in module pmagpy.ipmag:

aniso_magic_nb(infile='specimens.txt', samp_file='samples.txt', site_file='sites.txt', verbose=True, ipar=False, ihext=True, ivec=False, isite=False, iloc=False, iboot=False, vec=0, Dir=[], PDir=[], crd='s', num_bootstraps=1000, dir_path='.', fignum=1, save_plots=True, interactive=False, fmt='png')
    Makes plots of anisotropy eigenvectors, eigenvalues and confidence bounds
    All directions are on the lower hemisphere.

    Parameters
    __________
    infile : specimens formatted file with aniso_s data
    samp_file : samples formatted file with sample => site relationship
    site_file : sites formatted file with site => location relationship
    verbose : if True, print messages to output
    confidence bounds options:
        ipar : if True - perform parametric bootstrap - requires non-blank aniso_s_sigma
        ihext : if True - Hext ellipses
        ivec : if True - plot bootstrapped eigenvectors instead of ellipses
        isite : if True plot by site, requires non-blank samp_file
        #iloc : if True plot by location, requires non-blank samp_file, and site_file NOT IMPLEMENTED
        iboot : if True - bootstrap ellipses
    vec : eigenvector for comparison with Dir
    Dir : [Dec,Inc] list for comparison direction
    PDir : [Pole_dec, Pole_Inc] for pole to plane for comparison
        green dots are on the lower hemisphere, cyan are on the upper hemisphere
    crd : ['s','g','t'], coordinate system for plotting whereby:
        s : specimen coordinates, aniso_tilt_correction = -1, or unspecified
        g : geographic coordinates, aniso_tilt_correction = 0
        t : tilt corrected coordinates, aniso_tilt_correction = 100
    num_bootstraps : how many bootstraps to do, default 1000
    dir_path : directory path
    fignum : matplotlib figure number, default 1
    save_plots : bool, default True
        if True, create and save all requested plots
    interactive : bool, default False
        interactively plot and display for each specimen (this is best used on the command line only)
    fmt : str, default "svg"
        format for figures, [svg, jpg, pdf, png]
ipmag.aniso_magic_nb(infile='dike_specimens.txt',dir_path='data_files/aniso_magic',
iboot=1,ihext=0,ivec=1,PDir=[120,10],ipar=1, save_plots=False) # compare dike directions with plane of dike with pole of 120,10
-W- Couldn't read in samples data
-I- Make sure you've provided the correct file name
-W- Couldn't read in samples data
-I- Make sure you've provided the correct file name
desired coordinate system not available, using available: g
(True, [])
The specimen eigenvectors are plotted in the top diagram with the usual convention that squares are the V$_1$ directions, triangles are the V$_2$ directions and circles are the V$_3$ directions. All directions are plotted on the lower hemisphere. The bootstrapped eigenvectors are shown in the middle diagram. Cumulative distributions of the bootstrapped eigenvalues are shown in the bottom plot with the 95% confidence bounds plotted as vertical lines. It appears that the magma was moving in a northerly and slightly upward direction along the dike.
There are more options to ipmag.aniso_magic_nb() that come in handy. In particular, one often wishes to test if a particular fabric is isotropic (the three eigenvalues cannot be distinguished), or if a particular eigenvector is parallel to some direction. For example, undisturbed sedimentary fabrics are oblate (the maximum and intermediate directions cannot be distinguished from one another, but are distinct from the minimum) and the eigenvector associated with the minimum eigenvalue is vertical. These criteria can be tested using the distributions of bootstrapped eigenvalues and eigenvectors.
The following session illustrates how this is done, using the data in the test file sed_specimens.txt in the aniso_magic directory.
ipmag.aniso_magic_nb(infile='sed_specimens.txt',dir_path='data_files/aniso_magic',
iboot=1,ihext=0,ivec=1,Dir=[0,90],vec=3,ipar=1, save_plots=False) # parametric bootstrap and compare V3 with vertical
-W- Couldn't read in samples data
-I- Make sure you've provided the correct file name
-W- Couldn't read in samples data
-I- Make sure you've provided the correct file name
desired coordinate system not available, using available: g
(True, [])
The top three plots are as in the dike example before, showing a clear triaxial fabric (all three eigenvalues and associated eigenvectors are distinct from one another). In the lower three plots we have the distributions of the three components of the chosen axis, V$_3$, their 95% confidence bounds (dashed lines) and the components of the designated direction (solid line). This direction is also shown in the equal area projection above as a red pentagon. The minimum eigenvector is not vertical in this case.
[Essentials Chapter 16] [command line version]
The program apwp calculates paleolatitude, declination, and inclination from a pole latitude and longitude based on the paper by Besse and Courtillot (2002; see Essentials Chapter 16 for a complete discussion). Here we will calculate the expected direction for 100 million year old rocks at a locality in La Jolla Cove (Latitude: 33$^{\circ}$N, Longitude: 117$^{\circ}$W). Assume that we are on the North American Plate! (Note that there is no option for the Pacific plate in the program apwp, and that La Jolla was on the North American plate until a few (6?) million years ago.)
Within the notebook we will call pmag.apwp.
help(pmag.apwp)
Help on function apwp in module pmagpy.pmag:

apwp(data, print_results=False)
    calculates expected pole positions and directions for given plate, location and age

    Parameters
    _________
    data : [plate,lat,lon,age]
        plate : [NA, SA, AF, IN, EU, AU, ANT, GL]
            NA : North America
            SA : South America
            AF : Africa
            IN : India
            EU : Eurasia
            AU : Australia
            ANT: Antarctica
            GL : Greenland
        lat/lon : latitude/longitude in degrees N/E
        age : age in millions of years
    print_results : if True will print out nicely formatted results

    Returns
    _________
    if print_results is False, [Age, Paleolat, Dec, Inc, Pole_lat, Pole_lon]
# here are the desired plate, latitude, longitude and age:
data=['NA',33,-117,100] # North American plate, lat and lon of San Diego at 100 Ma
pmag.apwp(data,print_results=True)
 Age    Paleolat.  Dec.   Inc.   Pole_lat.  Pole_Long.
 100.0  38.8       352.4  58.1   81.5       198.3
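As a sanity check (not part of apwp itself), the tabulated inclination follows from the tabulated paleolatitude through the dipole formula $\tan I = 2\tan\lambda$:

```python
import numpy as np

# Dipole formula: tan(I) = 2 tan(paleolatitude)
paleolat = 38.8  # paleolatitude from the apwp output above
inc = np.degrees(np.arctan(2 * np.tan(np.radians(paleolat))))
print(f'{inc:.1f}')  # 58.1, matching the Inc. column above
```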
Anisotropy of thermal remanence (ATRM) is similar to anisotropy of anhysteretic remanence (AARM), and the procedure for obtaining the tensor is also similar. Therefore, atrm_magic is quite similar to aarm_magic. However, the SIO lab procedures for the two experiments are somewhat different. In the ATRM experiment, there is a single, zero field step at the chosen temperature which is used as a baseline. We use only six positions (as opposed to nine for AARM) because of the additional risk of alteration at each temperature step. The positions are also different:
Image('data_files/Figures/atrm_meas.png')
The file atrm_magic_example.dat in the data_files/atrm_magic directory is an SIO formatted data file containing ATRM measurement data acquired at a temperature of 520°C. Note the special format of these files - the treatment column (column 2) has the temperature in centigrade followed by either a “00” for the obligatory zero field baseline step, a “10” for the first position, and so on. These could also be ‘0’ and ‘1’, etc.
Follow the instructions for sio_magic to import the ATRM data into the MagIC format. The DC field was 40 μT. The sample/site naming convention used option # 1 (see help menu) and the specimen and sample name are the same (specnum=0).
We will use ipmag.atrm_magic() to calculate the best-fit tensor and write out the MagIC tables which can be used to correct remanence data for the effects of remanent anisotropy.
convert.sio('atrm_magic_example.dat',dir_path='data_files/atrm_magic/',specnum=0,
location='unknown',codelist='T:ANI',samp_con='1',
meas_file='measurements.txt',labfield=40, phi=-1, theta=-1)
adding measurement column to measurements table!
-I- overwriting /Users/nebula/Python/PmagPy/measurements.txt
-I- 210 records written to measurements file
-I- overwriting /Users/nebula/Python/PmagPy/data_files/atrm_magic/specimens.txt
-I- 30 records written to specimens file
-I- overwriting /Users/nebula/Python/PmagPy/data_files/atrm_magic/samples.txt
-I- 30 records written to samples file
-I- overwriting /Users/nebula/Python/PmagPy/data_files/atrm_magic/sites.txt
-I- 10 records written to sites file
-I- overwriting /Users/nebula/Python/PmagPy/data_files/atrm_magic/locations.txt
-I- 1 records written to locations file
-I- overwriting /Users/nebula/Python/PmagPy/data_files/atrm_magic/measurements.txt
-I- 210 records written to measurements file
(True, '/Users/nebula/Python/PmagPy/data_files/atrm_magic/measurements.txt')
help(ipmag.atrm_magic)
Help on function atrm_magic in module pmagpy.ipmag:

atrm_magic(meas_file, dir_path='.', input_dir_path='', input_spec_file='specimens.txt', output_spec_file='specimens.txt', data_model_num=3)
    Converts ATRM data to best-fit tensor (6 elements plus sigma)

    Parameters
    ----------
    meas_file : str
        input measurement file
    dir_path : str
        output directory, default "."
    input_dir_path : str
        input file directory IF different from dir_path, default ""
    input_spec_file : str
        input specimen file name, default "specimens.txt"
    output_spec_file : str
        output specimen file name, default "specimens.txt"
    data_model_num : number
        MagIC data model [2, 3], default 3

    Returns
    ---------
    Tuple : (True or False indicating if conversion was successful, output file name written)
ipmag.atrm_magic('measurements.txt',dir_path='data_files/atrm_magic')
30 records written to file /Users/nebula/Python/PmagPy/data_files/atrm_magic/specimens.txt
specimen data stored in /Users/nebula/Python/PmagPy/data_files/atrm_magic/specimens.txt
(True, '/Users/nebula/Python/PmagPy/data_files/atrm_magic/specimens.txt')
[Essentials Chapter 2] [command line version]
b_vdm converts a geomagnetic field intensity observed at the Earth's surface at a particular (paleo)latitude to the equivalent Virtual [Axial] Dipole Moment (VDM or VADM). We will call pmag.b_vdm() directly from within the notebook. [See also vdm_b.]
Here we use the function pmag.b_vdm() to convert an estimated paleofield value of 33 $\mu$T obtained from a lava flow at 22$^{\circ}$ N latitude to the equivalent Virtual Dipole Moment (VDM) in Am$^2$.
help(pmag.b_vdm)
Help on function b_vdm in module pmagpy.pmag:

b_vdm(B, lat)
    Converts a magnetic field value (input in units of tesla) to a virtual dipole moment (VDM) or a virtual axial dipole moment (VADM); output in units of Am^2

    Parameters
    ----------
    B : local magnetic field strength in tesla
    lat : latitude of site in degrees

    Returns
    ----------
    V(A)DM in units of Am^2

    Examples
    --------
    >>> pmag.b_vdm(33e-6,22)*1e-21
    71.58815974511788
print ('%7.1f'%(pmag.b_vdm(33e-6,22)*1e-21),' ZAm^2')
71.6 ZAm^2
pmag.b_vdm(33e-6,22)*1e-21
71.58815974511788
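For reference, the conversion is the dipole-moment form of the dipole field equation, $\mathrm{VDM} = \frac{4\pi r^3}{\mu_0}\frac{B}{\sqrt{1+3\sin^2\lambda}}$; a hand calculation with NumPy (assuming an Earth radius of 6.371$\times$10$^6$ m) reproduces the value above:

```python
import numpy as np

mu0 = 4 * np.pi * 1e-7            # permeability of free space (T m/A)
r = 6.371e6                       # assumed Earth radius (m)
B, lat = 33e-6, np.radians(22)    # field (T) and latitude from the example
# dipole equation solved for the moment:
vdm = (4 * np.pi * r**3 / mu0) * B / np.sqrt(1 + 3 * np.sin(lat)**2)
print(f'{vdm * 1e-21:.1f} ZAm^2')  # 71.6 ZAm^2, as above
```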
[Essentials Chapter 8] [MagIC Database] [command line version]
It is often useful to plot measurements from one experiment against another. For example, rock magnetic studies of sediments often plot the IRM against the ARM or magnetic susceptibility. All of these types of measurements can be imported into a single measurements formatted file, using MagIC method codes and other clues (lab fields, etc.) to differentiate one measurement from another.
Data were obtained by Hartl and Tauxe (1997, doi: 10.1111/j.1365-246X.1997.tb04082.x) from a Paleogene core from 28$^{\circ}$ S (DSDP Site 522) and used for a relative paleointensity study. IRM, ARM, magnetic susceptibility and remanence data were uploaded to the MagIC database. The MagIC measurements formatted file for this study (which you can get from https://earthref.org/MagIC/doi/10.1111/j.1365-246X.1997.tb04082.x and unpack with download_magic) is saved in data_files/biplot_magic/measurements.txt.
We can create these plots using Pandas. The key to what the measurements mean is in the MagIC method codes, so we can first get a unique list of all the available method_codes, then plot the ones we are interested in against each other. Let's read the data file into a Pandas DataFrame and extract the method codes to see what we have:
# read in the data
meas_df=pd.read_csv('data_files/biplot_magic/measurements.txt',sep='\t',header=1)
# get the method_codes and print
print(meas_df.method_codes.unique())
# take a look at the top part of the measurements data frame
meas_df.head()
['LT-AF-Z' 'LT-AF-I' 'LT-IRM' 'LP-X']
|   | citations | dir_dec | dir_inc | experiment | magn_mass | meas_temp | measurement | method_codes | quality | specimen | standard | susc_chi_mass | treat_ac_field | treat_dc_field | treat_step_num | treat_temp |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0 | This study | 268.5 | -41.2 | 15-1-013:LP-AF-DIR | 0.000003 | 300 | 15-1-013:LP-AF-DIR-1 | LT-AF-Z | g | 15-1-013 | u | NaN | 0.015 | 0.00000 | 1.0 | 300 |
| 1 | This study | NaN | NaN | 15-1-013:LP-ARM | 0.000179 | 300 | 15-1-013:LP-ARM-2 | LT-AF-I | g | 15-1-013 | u | NaN | 0.080 | 0.00005 | 2.0 | 300 |
| 2 | This study | NaN | NaN | 15-1-013:LP-IRM | 0.003600 | 300 | 15-1-013:LP-IRM-3 | LT-IRM | g | 15-1-013 | u | NaN | 0.000 | 1.00000 | 3.0 | 300 |
| 3 | This study | NaN | NaN | 15-1-013:LP-X | NaN | 300 | 15-1-013:LP-X-4 | LP-X | NaN | 15-1-013 | NaN | 2.380000e-07 | 0.010 | 0.00000 | 4.0 | 300 |
| 4 | This study | 181.0 | 68.6 | 15-1-022:LP-AF-DIR | 0.000011 | 300 | 15-1-022:LP-AF-DIR-5 | LT-AF-Z | g | 15-1-022 | u | NaN | 0.015 | 0.00000 | 5.0 | 300 |
These are: an AF demag step (LT-AF-Z), an ARM (LT-AF-I), an IRM (LT-IRM) and a susceptibility (LP-X). Now we can fish out data for each method, merge them by specimen, dropping any missing measurements and finally plot one against the other.
# get the IRM data
IRM=meas_df[meas_df.method_codes.str.contains('LT-IRM')]
IRM=IRM[['specimen','magn_mass']] #trim the data frame
IRM.columns=['specimen','IRM'] # rename the column
# do the same for the ARM data
ARM=meas_df[meas_df.method_codes.str.contains('LT-AF-I')]
ARM=ARM[['specimen','magn_mass']]
ARM.columns=['specimen','ARM']
# and the magnetic susceptibility
CHI=meas_df[meas_df.method_codes.str.contains('LP-X')]
CHI=CHI[['specimen','susc_chi_mass']]
CHI.columns=['specimen','CHI']
# merge IRM ARM data by specimen
RMRMs=pd.merge(IRM,ARM,on='specimen')
# add on the susceptibility data
RMRMs=pd.merge(RMRMs,CHI,on='specimen')
Now we are ready to make the plots.
fig=plt.figure(1, (12,4)) # make a figure
fig.add_subplot(131) # make the first in a row of three subplots
plt.plot(RMRMs.IRM,RMRMs.ARM,'ro',markeredgecolor='black')
plt.xlabel('IRM (Am$^2$/kg)') # label the X axis
plt.ylabel('ARM (Am$^2$/kg)') # and the Y axis
fig.add_subplot(132)# make the second in a row of three subplots
plt.plot(RMRMs.IRM,RMRMs.CHI,'ro',markeredgecolor='black')
plt.xlabel('IRM (Am$^2$/kg)')
plt.ylabel('$\chi$ (m$^3$/kg)')
fig.add_subplot(133)# and the third in a row of three subplots
plt.plot(RMRMs.ARM,RMRMs.CHI,'ro',markeredgecolor='black')
plt.xlabel('ARM (Am$^2$/kg)')
plt.ylabel('$\chi$ (m$^3$/kg)');
[Essentials Chapter 13] [command line version]
bootams calculates bootstrap statistics for anisotropy tensor data in the form of:
x11 x22 x33 x12 x23 x13
It does this by selecting para-data sets and calculating the Hext average eigenparameters. There is an optional parametric bootstrap whereby the $\sigma$ for the data set as a whole is used to draw new para-data sets. The bootstrapped eigenparameters are assumed to be Kent distributed and the program calculates Kent error ellipses for each set of eigenvectors. It also estimates the standard deviations of the bootstrapped eigenvalues.
bootams reads in a file with data for the six tensor elements (x11 x22 x33 x12 x23 x13) for specimens and calls pmag.s_boot() using a parametric or non-parametric bootstrap as desired. If all that is desired is the bootstrapped eigenparameters, pmag.s_boot() has all we need, but if the Kent ellipses are required, we can call pmag.sbootpars() to calculate these more derived products and print them out.
Note that every time the bootstrap program is called, the output will be slightly different because it depends on calls to a random number generator. If the answers differ by a lot, the number of bootstrap calculations is too low; the number of bootstraps can be changed with the nb option below.
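As a reminder of what the six-element form encodes: each row maps onto a symmetric 3x3 tensor whose eigenparameters the Hext and bootstrap machinery averages. A minimal sketch with made-up tensor elements:

```python
import numpy as np

# made-up normalized tensor elements in the order x11 x22 x33 x12 x23 x13
x11, x22, x33, x12, x23, x13 = 0.3354, 0.3349, 0.3297, 0.0005, -0.0003, 0.0002
A = np.array([[x11, x12, x13],
              [x12, x22, x23],
              [x13, x23, x33]])  # rebuild the symmetric 3x3 tensor
tau, V = np.linalg.eigh(A)       # eigenvalues (ascending) and eigenvectors
print(tau[::-1])                 # tau_1 >= tau_2 >= tau_3; they sum to ~1
```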
We can do all this from within the notebook as follows:
help(pmag.s_boot)
Help on function s_boot in module pmagpy.pmag: s_boot(Ss, ipar=0, nb=1000) Returns bootstrap parameters for S data Parameters __________ Ss : nested array of [[x11 x22 x33 x12 x23 x13],....] data ipar : if True, do a parametric bootstrap nb : number of bootstraps Returns ________ Tmean : average eigenvalues Vmean : average eigvectors Taus : bootstrapped eigenvalues Vs : bootstrapped eigenvectors
So we will read in the data, get the bootstrapped eigenparameters, calculate the Kent parameters, and print out the results:
Ss=np.loadtxt('data_files/bootams/bootams_example.dat')
Tmean,Vmean,Taus,Vs=pmag.s_boot(Ss) # get the bootstrapped eigenparameters
bpars=pmag.sbootpars(Taus,Vs) # calculate kent parameters for bootstrap
print("""tau tau_sigma V_dec V_inc V_zeta V_zeta_dec V_zeta_inc V_eta V_eta_dec V_eta_inc
""")
outstring='%7.5f %7.5f %7.1f %7.1f %7.1f %7.1f %7.1f %7.1f %7.1f %7.1f'%(\
Tmean[0],bpars["t1_sigma"],Vmean[0][0],Vmean[0][1],\
bpars["v1_zeta"],bpars["v1_zeta_dec"],bpars["v1_zeta_inc"],\
bpars["v1_eta"],bpars["v1_eta_dec"],bpars["v1_eta_inc"])
print(outstring)
outstring='%7.5f %7.5f %7.1f %7.1f %7.1f %7.1f %7.1f %7.1f %7.1f %7.1f'%(\
Tmean[1],bpars["t2_sigma"],Vmean[1][0],Vmean[1][1],\
bpars["v2_zeta"],bpars["v2_zeta_dec"],bpars["v2_zeta_inc"],\
bpars["v2_eta"],bpars["v2_eta_dec"],bpars["v2_eta_inc"])
print(outstring)
outstring='%7.5f %7.5f %7.1f %7.1f %7.1f %7.1f %7.1f %7.1f %7.1f %7.1f'%(\
Tmean[2],bpars["t3_sigma"],Vmean[2][0],Vmean[2][1],\
bpars["v3_zeta"],bpars["v3_zeta_dec"],bpars["v3_zeta_inc"],\
bpars["v3_eta"],bpars["v3_eta_dec"],bpars["v3_eta_inc"])
print(outstring)
tau tau_sigma V_dec V_inc V_zeta V_zeta_dec V_zeta_inc V_eta V_eta_dec V_eta_inc 
0.33505 0.00022 5.3 14.7 10.3 259.9 40.4 13.5 111.0 45.1 
0.33334 0.00021 124.5 61.7 6.1 223.9 5.0 17.5 316.7 28.9 
0.33161 0.00015 268.8 23.6 10.8 359.6 7.2 12.9 105.3 64.9
# with parametric bootstrap:
Ss=np.loadtxt('data_files/bootams/bootams_example.dat')
Tmean,Vmean,Taus,Vs=pmag.s_boot(Ss,ipar=1,nb=5000) # get the bootstrapped eigenparameters
bpars=pmag.sbootpars(Taus,Vs) # calculate kent parameters for bootstrap
print("""tau tau_sigma V_dec V_inc V_zeta V_zeta_dec V_zeta_inc V_eta V_eta_dec V_eta_inc
""")
outstring='%7.5f %7.5f %7.1f %7.1f %7.1f %7.1f %7.1f %7.1f %7.1f %7.1f'%(\
Tmean[0],bpars["t1_sigma"],Vmean[0][0],Vmean[0][1],\
bpars["v1_zeta"],bpars["v1_zeta_dec"],bpars["v1_zeta_inc"],\
bpars["v1_eta"],bpars["v1_eta_dec"],bpars["v1_eta_inc"])
print(outstring)
outstring='%7.5f %7.5f %7.1f %7.1f %7.1f %7.1f %7.1f %7.1f %7.1f %7.1f'%(\
Tmean[1],bpars["t2_sigma"],Vmean[1][0],Vmean[1][1],\
bpars["v2_zeta"],bpars["v2_zeta_dec"],bpars["v2_zeta_inc"],\
bpars["v2_eta"],bpars["v2_eta_dec"],bpars["v2_eta_inc"])
print(outstring)
outstring='%7.5f %7.5f %7.1f %7.1f %7.1f %7.1f %7.1f %7.1f %7.1f %7.1f'%(\
Tmean[2],bpars["t3_sigma"],Vmean[2][0],Vmean[2][1],\
bpars["v3_zeta"],bpars["v3_zeta_dec"],bpars["v3_zeta_inc"],\
bpars["v3_eta"],bpars["v3_eta_dec"],bpars["v3_eta_inc"])
print(outstring)
tau tau_sigma V_dec V_inc V_zeta V_zeta_dec V_zeta_inc V_eta V_eta_dec V_eta_inc 
0.33505 0.00020 5.3 14.7 10.1 266.9 29.3 22.8 116.9 57.0 
0.33334 0.00023 124.5 61.7 19.2 239.2 12.4 23.8 334.7 23.4 
0.33161 0.00026 268.8 23.6 10.4 7.6 20.3 20.3 135.1 58.8
[Essentials Chapter 2] [command line version]
cart_dir converts cartesian coordinates (X,Y,Z) to polar coordinates (Declination, Inclination, Intensity). We will call pmag.cart2dir().
help(pmag.cart2dir)
Help on function cart2dir in module pmagpy.pmag: cart2dir(cart) Converts a direction in cartesian coordinates into declination, inclinations Parameters ---------- cart : input list of [x,y,z] or list of lists [[x1,y1,z1],[x2,y2,z2]...] Returns ------- direction_array : returns an array of [declination, inclination, intensity] Examples -------- >>> pmag.cart2dir([0,1,0]) array([ 90., 0., 1.])
# read in data file from example file
cart=np.loadtxt('data_files/cart_dir/cart_dir_example.dat')
print ('Input: \n',cart) # print out the cartesian coordinates
# print out the results
dirs = pmag.cart2dir(cart)
print ("Output: ")
for d in dirs:
print ('%7.1f %7.1f %8.3e'%(d[0],d[1],d[2]))
Input: 
 [[ 0.3971 -0.1445  0.9063]
 [-0.5722  0.04   -0.8192]]
Output: 
  340.0    65.0 1.000e+00
  176.0   -55.0 1.000e+00
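The conversion itself is just trigonometry; here is a pure-numpy sketch of what pmag.cart2dir() computes, checked against the first row of the example above:

```python
import numpy as np

x, y, z = 0.3971, -0.1445, 0.9063          # first row of the input above
R = np.sqrt(x*x + y*y + z*z)               # intensity (vector length)
dec = np.degrees(np.arctan2(y, x)) % 360.  # declination, mapped into [0, 360)
inc = np.degrees(np.arcsin(z / R))         # inclination (positive down)
print('%7.1f %7.1f %8.3e' % (dec, inc, R)) # 340.0, 65.0, 1.000e+00 as above
```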
[Essentials Chapter 8] [MagIC Database] [command line version]
It is sometimes useful to measure susceptibility as a function of temperature, applied field and frequency. Here we use a data set that came from the Tiva Canyon Tuff sequence (see Jackson et al., 2006, doi: 10.1029/2006JB004514).
chi_magic reads in a MagIC formatted file and makes various plots. We can call ipmag.chi_magic() directly, or make the plots ourselves using Pandas.
# with ipmag
ipmag.chi_magic('data_files/chi_magic/measurements.txt', save_plots=False)
Not enough data to plot IRM-Kappa-2352
(True, [])
# read in data from data model 3 example file using pandas
chi_data=pd.read_csv('data_files/chi_magic/measurements.txt',sep='\t',header=1)
print (chi_data.columns)
# get arrays of available temps, frequencies and fields
Ts=np.sort(chi_data.meas_temp.unique())
Fs=np.sort(chi_data.meas_freq.unique())
Bs=np.sort(chi_data.meas_field_ac.unique())
Index(['experiment', 'specimen', 'measurement', 'treat_step_num', 'citations', 'instrument_codes', 'method_codes', 'meas_field_ac', 'meas_freq', 'meas_temp', 'timestamp', 'susc_chi_qdr_volume', 'susc_chi_volume'], dtype='object')
# plot chi versus temperature at constant field
b=Bs.max()
for f in Fs:
this_f=chi_data[chi_data.meas_freq==f]
this_f=this_f[this_f.meas_field_ac==b]
plt.plot(this_f.meas_temp,1e6*this_f.susc_chi_volume,label='%i'%(f)+' Hz')
plt.legend()
plt.xlabel('Temperature (K)')
plt.ylabel('$\chi$ ($\mu$SI)')
plt.title('B = '+'%7.2e'%(b)+ ' T')
Text(0.5,1,'B = 3.00e-04 T')
# plot chi versus frequency at constant B
b=Bs.max()
t=Ts.min()
this_t=chi_data[chi_data.meas_temp==t]
this_t=this_t[this_t.meas_field_ac==b]
plt.semilogx(this_t.meas_freq,1e6*this_t.susc_chi_volume,label='%i'%(t)+' K')
plt.legend()
plt.xlabel('Frequency (Hz)')
plt.ylabel('$\chi$ ($\mu$SI)')
plt.title('B = '+'%7.2e'%(b)+ ' T')
Text(0.5,1,'B = 3.00e-04 T')
You can see the dependence on temperature, frequency and applied field. These data support the suggestion that there is a strong superparamagnetic component in these specimens.
[Essentials Chapter 12] [command line version]
Most paleomagnetists use some form of Fisher statistics to decide if two directions are statistically distinct or not (see Essentials Chapter 11 for a discussion of those techniques). But often directional data are not Fisher distributed, and the parametric approach will give misleading answers. In these cases, one can use a bootstrap approach, described in detail in [Essentials Chapter 12]. The program common_mean can be used for a bootstrap test for a common mean, to check whether two declination, inclination data sets share a common mean at the 95% level of confidence.
We want to compare two data sets: common_mean_ex_file1.dat and common_mean_ex_file2.dat. But first, let’s look at the data in equal area projection using the methods outlined in the section on eqarea.
directions_A=np.loadtxt('data_files/common_mean/common_mean_ex_file1.dat')
directions_B=np.loadtxt('data_files/common_mean/common_mean_ex_file2.dat')
ipmag.plot_net(1)
ipmag.plot_di(di_block=directions_A,color='red')
ipmag.plot_di(di_block=directions_B,color='blue')
Now let’s look at the common mean problem using ipmag.common_mean_bootstrap().
help(ipmag.common_mean_bootstrap)
Help on function common_mean_bootstrap in module pmagpy.ipmag: common_mean_bootstrap(Data1, Data2, NumSims=1000, save=False, save_folder='.', fmt='svg', figsize=(7, 2.3), x_tick_bins=4) Conduct a bootstrap test (Tauxe, 2010) for a common mean on two declination, inclination data sets. Plots are generated of the cumulative distributions of the Cartesian coordinates of the means of the pseudo-samples (one for x, one for y and one for z). If the 95 percent confidence bounds for each component overlap, the two directions are not significantly different. Parameters ---------- Data1 : a nested list of directional data [dec,inc] (a di_block) Data2 : a nested list of directional data [dec,inc] (a di_block) if Data2 is length of 1, treat as single direction NumSims : number of bootstrap samples (default is 1000) save : optional save of plots (default is False) save_folder : path to directory where plots should be saved fmt : format of figures to be saved (default is 'svg') figsize : optionally adjust figure size (default is (7, 2.3)) x_tick_bins : because they occasionally overlap depending on the data, this argument allows you adjust number of tick marks on the x axis of graphs (default is 4) Returns ------- three plots : cumulative distributions of the X, Y, Z of bootstrapped means Examples -------- Develop two populations of directions using ``ipmag.fishrot``. Use the function to determine if they share a common mean (through visual inspection of resulting plots). >>> directions_A = ipmag.fishrot(k=20, n=30, dec=40, inc=60) >>> directions_B = ipmag.fishrot(k=35, n=25, dec=42, inc=57) >>> ipmag.common_mean_bootstrap(directions_A, directions_B)
ipmag.common_mean_bootstrap(directions_A,directions_B,figsize=(9,3))
These suggest that the two data sets share a common mean.
Now compare the data in common_mean_ex_file1.dat with the direction expected at the 5$^{\circ}$ N latitude at which these data were collected (Dec=0, Inc=9.9).
To do this, we set the second data set to be the desired direction for comparison.
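The expected inclination comes from the geocentric axial dipole relation tan(I) = 2 tan(latitude), which for a site at 5$^{\circ}$ N gives the Inc=9.9 used below:

```python
import numpy as np

lat = 5.0  # site latitude in degrees
inc = np.degrees(np.arctan(2.0 * np.tan(np.radians(lat))))
print(round(inc, 1))  # 9.9
```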
comp_dir=[0,9.9]
ipmag.common_mean_bootstrap(directions_A,comp_dir,figsize=(9,3))
The data (cumulative distribution functions) are entirely consistent with the expected direction, whose Cartesian coordinates are shown as dashed lines.
[Essentials Chapter 16] [command line version]
We can make an orthographic projection, centered at latitude = -20$^{\circ}$ and longitude = 0$^{\circ}$, of the African and South American continents reconstructed to 180 Ma using the Torsvik et al. (2008, doi: 10.1029/2007RG000227) poles of finite rotation. We do this by first holding Africa fixed.
We need to read in the continental outlines from continents.get_continent(), then rotate them around the rotation pole and angle specified by the age and continent in question (from frp.get_pole()) using pmag.pt_rot(). Then we can plot them using pmagplotlib.plot_map(). If the Basemap version is preferred, use pmagplotlib.plot_map_basemap(). Here we demonstrate this from within the notebook by calling the PmagPy functions directly.
# load in the continents module
import pmagpy.continents as continents
import pmagpy.frp as frp
help(continents.get_continent)
Help on function get_continent in module pmagpy.continents: get_continent(continent) get_continent(continent) returns the outlines of specified continent. Parameters: ____________________ continent: af : Africa congo : Congo kala : Kalahari aus : Australia balt : Baltica eur : Eurasia ind : India sam : South America ant : Antarctica grn : Greenland lau : Laurentia nam : North America gond : Gondawanaland Returns : array of [lat/long] points defining continent
help(pmagplotlib.plot_map)
Help on function plot_map in module pmagpy.pmagplotlib: plot_map(fignum, lats, lons, Opts) makes a cartopy map with lats/lons Requires installation of cartopy Parameters: _______________ fignum : matplotlib figure number lats : array or list of latitudes lons : array or list of longitudes Opts : dictionary of plotting options: Opts.keys= proj : projection [supported cartopy projections: pc = Plate Carree aea = Albers Equal Area aeqd = Azimuthal Equidistant lcc = Lambert Conformal lcyl = Lambert Cylindrical merc = Mercator mill = Miller Cylindrical moll = Mollweide [default] ortho = Orthographic robin = Robinson sinu = Sinusoidal stere = Stereographic tmerc = Transverse Mercator utm = UTM [set zone and south keys in Opts] laea = Lambert Azimuthal Equal Area geos = Geostationary npstere = North-Polar Stereographic spstere = South-Polar Stereographic latmin : minimum latitude for plot latmax : maximum latitude for plot lonmin : minimum longitude for plot lonmax : maximum longitude lat_0 : central latitude lon_0 : central longitude sym : matplotlib symbol symsize : symbol size in pts edge : markeredgecolor cmap : matplotlib color map res : resolution [c,l,i,h] for low/crude, intermediate, high boundinglat : bounding latitude sym : matplotlib symbol for plotting symsize : matplotlib symbol size for plotting names : list of names for lats/lons (if empty, none will be plotted) pltgrd : if True, put on grid lines padlat : padding of latitudes padlon : padding of longitudes gridspace : grid line spacing global : global projection [default is True] oceancolor : 'azure' landcolor : 'bisque' [choose any of the valid color names for matplotlib see https://matplotlib.org/examples/color/named_colors.html details : dictionary with keys: coasts : if True, plot coastlines rivers : if True, plot rivers states : if True, plot states countries : if True, plot countries ocean : if True, plot ocean fancy : if True, plot etopo 20 grid NB: etopo must be installed if Opts keys not set 
These are the defaults: Opts={'latmin':-90,'latmax':90,'lonmin':0,'lonmax':360,'lat_0':0,'lon_0':0,'proj':'moll','sym':'ro','symsize':5,'edge':'black','pltgrid':1,'res':'c','boundinglat':0.,'padlon':0,'padlat':0,'gridspace':30,'details':all False,'cmap':'jet','fancy':0,'zone':'','south':False,'oceancolor':'azure','landcolor':'bisque'}
# retrieve continental outline
# This is the version that uses cartopy and requires installation of cartopy
af=continents.get_continent('af').transpose()
sam=continents.get_continent('sam').transpose()
#define options for pmagplotlib.plot_map
plt.figure(1,(5,5))
Opts = {'latmin': -90, 'latmax': 90, 'lonmin': 0., 'lonmax': 360., 'lat_0': -20, \
'lon_0': 345,'proj': 'ortho', 'sym': 'r-', 'symsize': 3,\
'pltgrid': 0, 'res': 'c', 'boundinglat': 0.}
if has_cartopy:
pmagplotlib.plot_map(1,af[0],af[1],Opts)
Opts['sym']='b-'
pmagplotlib.plot_map(1,sam[0],sam[1],Opts)
elif has_basemap:
pmagplotlib.plot_map_basemap(1,af[0],af[1],Opts)
Opts['sym']='b-'
pmagplotlib.plot_map_basemap(1,sam[0],sam[1],Opts)
Now for the rotation part. The finite rotation poles are stored in the function frp.get_pole().
help(frp.get_pole)
Help on function get_pole in module pmagpy.frp: get_pole(continent, age) returns rotation poles and angles for specified continents and ages assumes fixed Africa. Parameters __________ continent : aus : Australia eur : Eurasia mad : Madacascar [nwaf,congo] : NW Africa [choose one] col : Colombia grn : Greenland nam : North America par : Paraguay eant : East Antarctica ind : India [neaf,kala] : NE Africa [choose one] [sac,sam] : South America [choose one] ib : Iberia saf : South Africa Returns _______ [pole longitude, pole latitude, rotation angle] : for the continent at specified age
# get the rotation pole for south america relative to South Africa at 180 Ma
sam_pole=frp.get_pole('sam',180)
# NB: for african rotations, first rotate other continents to fixed Africa, then
# rotate with South African pole (saf)
The rotation is done by pmag.pt_rot().
help(pmag.pt_rot)
Help on function pt_rot in module pmagpy.pmag: pt_rot(EP, Lats, Lons) Rotates points on a globe by an Euler pole rotation using method of Cox and Hart 1986, box 7-3. Parameters ---------- EP : Euler pole list [lat,lon,angle] Lats : list of latitudes of points to be rotated Lons : list of longitudes of points to be rotated Returns _________ RLats : rotated latitudes RLons : rotated longitudes
so here we go...
plt.figure(1,(5,5))
sam_rot=pmag.pt_rot(sam_pole,sam[0],sam[1]) # rotate the South American outline
# and plot 'em
Opts['sym']='r-'
if has_cartopy:
pmagplotlib.plot_map(1,af[0],af[1],Opts)
Opts['sym']='b-'
pmagplotlib.plot_map(1,sam_rot[0],sam_rot[1],Opts)
elif has_basemap:
pmagplotlib.plot_map_basemap(1,af[0],af[1],Opts)
Opts['sym']='b-'
pmagplotlib.plot_map_basemap(1,sam_rot[0],sam_rot[1],Opts)
[Essentials Chapter 15] [command line version]
The program core_depthplot can be used to plot various measurement data versus sample depth. The data must be in the MagIC data format. The program will plot whole core data, discrete samples at a bulk demagnetization step, data from vector demagnetization experiments, and so on.
We can try this out on some data from DSDP Hole 522 (drilled at 26S/5W) and measured by Tauxe and Hartl (1997, doi: 10.1111/j.1365-246X.1997.tb04082.x). These were downloaded and unpacked in the biplot_magic example. More of the data are in the directory ../data_files/core_depthplot.
In this example, we will plot the alternating field (AF) data after the 15 mT step. The magnetizations will be plotted on a log scale and, as this is a record of the Oligocene, we will also plot the Oligocene time scale, using the calibration of Gradstein et al. (2012), commonly referred to as “GTS12”. We are only interested in the data between 50 and 150 meters, and we are not interested in the declinations here.
All this can be done using the wonders of Pandas data frames using the data in the data_files/core_depthplot directory.
Let's do things this way:
specimens=pd.read_csv('data_files/core_depthplot/specimens.txt',sep='\t',header=1)
sites=pd.read_csv('data_files/core_depthplot/sites.txt',sep='\t',header=1)
specimens=specimens.dropna(subset=['dir_inc']) # kill unwanted lines with duplicate or irrelevant info
specimens['site']=specimens['specimen'] # make a column with site name
data=pd.merge(specimens,sites,on='site') # merge the two data frames on site
data=data[data.core_depth>50] # all levels > 50
data=data[data.core_depth<150] # and < 150
lat=26 # we need this for the GAD INC
Plot versus core_depth
fig=plt.figure(1,(6,12)) # make the figure
ax=fig.add_subplot(121) # make the first of 2 subplots
plt.ylabel('Depth (m)') # label the Y axis
plt.plot(data.dir_inc,data.core_depth,'k-') # draw on a black line through the data
# draw the data points as cyan dots with black edges
plt.plot(data.dir_inc,data.core_depth,'co',markeredgecolor='black')
plt.title('Inclinations') # put on a title
plt.axvline(0,color='black')# make a central line at inc=0
plt.ylim(150,50) # set the plot Y limits to the desired depths
fig.add_subplot(122) # make the second of two subplots
# plot intensity data on semi-log plot
plt.semilogx(data.int_rel/data.int_rel.mean(),data.core_depth,'k-')
plt.semilogx(data.int_rel/data.int_rel.mean(),\
data.core_depth,'co',markeredgecolor='black')
plt.ylim(150,50)
plt.title('Relative Intensity');
And now versus age:
fig=plt.figure(1,(9,12)) # make the figure
ax=fig.add_subplot(131) # make the first of three subplots
pmagplotlib.plot_ts(ax,23,34,timescale='gts12') # plot on the time scale
fig.add_subplot(132) # make the second of three subplots
plt.plot(data.dir_inc,data.core_depth,'k-')
plt.plot(data.dir_inc,data.core_depth,'co',markeredgecolor='black')
plt.ylim(35,23)
# calculate the geocentric axial dipole field for the site latitude
gad=np.degrees(np.arctan(2.*np.tan(np.radians(lat)))) # tan (I) = 2 tan (lat)
# put it on the plot as a green dashed line
plt.axvline(gad,color='green',linestyle='dashed',linewidth=2)
plt.axvline(-gad,color='green',linestyle='dashed',linewidth=2)
plt.title('Inclinations')
plt.ylim(150,50)
fig.add_subplot(133) # make the third of three plots
# plot the intensity data on semi-log plot
plt.semilogx(data.int_rel/data.int_rel.mean(),data.core_depth,'k-')
plt.semilogx(data.int_rel/data.int_rel.mean(),data.core_depth,'co',markeredgecolor='black')
plt.ylim(150,50)
plt.title('Relative Intensity');
[Essentials Chapter 6] [command line version]
Curie Temperature experiments, saved in MagIC formatted files, can be plotted using ipmag.curie().
help(ipmag.curie)
Help on function curie in module pmagpy.ipmag: curie(path_to_file='.', file_name='', magic=False, window_length=3, save=False, save_folder='.', fmt='svg', t_begin='', t_end='') Plots and interprets curie temperature data. *** The 1st derivative is calculated from smoothed M-T curve (convolution with trianfular window with width= <-w> degrees) *** The 2nd derivative is calculated from smoothed 1st derivative curve (using the same sliding window width) *** The estimated curie temp. is the maximum of the 2nd derivative. Temperature steps should be in multiples of 1.0 degrees. Parameters __________ file_name : name of file to be opened Optional Parameters (defaults are used if not specified) ---------- path_to_file : path to directory that contains file (default is current directory, '.') window_length : dimension of smoothing window (input to smooth() function) save : boolean argument to save plots (default is False) save_folder : relative directory where plots will be saved (default is current directory, '.') fmt : format of saved figures t_begin: start of truncated window for search t_end: end of truncated window for search magic : True if MagIC formated measurements.txt file
ipmag.curie(path_to_file='data_files/curie',file_name='curie_example.dat',\
window_length=10)
second derivative maximum is at T=552
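The second-derivative estimate can be sketched on synthetic data. Here M(T) is modeled as a smooth tanh-shaped drop with an assumed Curie point near 550; note that the maximum of the second derivative picks the top of the bend, a few degrees above the inflection point:

```python
import numpy as np

T = np.arange(300., 700., 1.0)               # temperature steps of 1 degree
M = 0.5 * (1.0 - np.tanh((T - 550.) / 10.))  # synthetic, already-smooth M-T curve
d1 = np.gradient(M, T)                       # first derivative
d2 = np.gradient(d1, T)                      # second derivative
Tc_est = T[np.argmax(d2)]                    # maximum of the second derivative
print(Tc_est)                                # a few degrees above 550
```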
[Essentials Chapter 5] [command line version]
The program dayplot_magic makes Day (Day et al., 1977), or Squareness-Coercivity and Squareness-Coercivity of Remanence plots (e.g., Tauxe et al., 2002) from the MagIC formatted data. To do this, we will call ipmag.dayplot_magic().
help(ipmag.dayplot_magic)
Help on function dayplot_magic in module pmagpy.ipmag: dayplot_magic(path_to_file='.', hyst_file='specimens.txt', rem_file='', save=True, save_folder='.', fmt='svg', data_model=3, interactive=False, contribution=None) Makes 'day plots' (Day et al. 1977) and squareness/coercivity plots (Neel, 1955; plots after Tauxe et al., 2002); plots 'linear mixing' curve from Dunlop and Carter-Stiglitz (2006). Optional Parameters (defaults are used if not specified) ---------- path_to_file : path to directory that contains files (default is current directory, '.') the default input file is 'specimens.txt' (data_model=3 if data_model = 2, then must these are the defaults: hyst_file : hysteresis file (default is 'rmag_hysteresis.txt') rem_file : remanence file (default is 'rmag_remanence.txt') save : boolean argument to save plots (default is True) save_folder : relative directory where plots will be saved (default is current directory, '.') fmt : format of saved figures (default is 'pdf')
ipmag.dayplot_magic(path_to_file='data_files/dayplot_magic',hyst_file='specimens.txt', save=False)
-W- Couldn't read in samples data -I- Make sure you've provided the correct file name -W- Couldn't read in samples data -I- Make sure you've provided the correct file name
(True, [])
[Essentials Appendix B] [command line version]
Paleomagnetic data are frequently plotted in equal area projection. PmagPy has several plotting options which do this (e.g., eqarea), but occasionally it is handy to be able to convert the directions to X,Y coordinates directly, without plotting them at all. Here is an example using the datafile di_eq_example.dat.
The program di_eq calls pmag.dimap(), which we can do from within a Jupyter notebook.
help(pmag.dimap)
Help on function dimap in module pmagpy.pmag: dimap(D, I) Function to map directions to x,y pairs in equal area projection Parameters ---------- D : list or array of declinations (as float) I : list or array or inclinations (as float) Returns ------- XY : x, y values of directions for equal area projection [x,y]
DIs=np.loadtxt('data_files/di_eq/di_eq_example.dat').transpose() # load in the data
print (pmag.dimap(DIs[0],DIs[1])) # call the function
[[-0.23941025 -0.8934912 ] [ 0.43641303 0.71216134] [ 0.06384422 0.76030049] [ 0.32144709 0.68621606] [ 0.32271993 0.67056248] [ 0.40741223 0.54065429] [ 0.5801562 0.34037562] [ 0.10535089 0.65772758] [ 0.24717308 0.59968683] [ 0.18234908 0.61560016] [ 0.17481507 0.60171742] [ 0.282746 0.54547233] [ 0.26486315 0.53827299] [ 0.23575838 0.5345358 ] [ 0.29066509 0.50548208] [ 0.26062905 0.51151332] [ 0.23208983 0.51642328] [ 0.24444839 0.50566578] [ 0.27792652 0.46438138] [ 0.2505103 0.47715181] [ 0.29177004 0.44081644] [ 0.10876949 0.51614821] [ 0.19670646 0.48201446] [ 0.34938995 0.38129223] [ 0.1684068 0.47556614] [ 0.20628586 0.44644351] [ 0.17570082 0.45064929] [ 0.30110381 0.37853937] [ 0.20495497 0.42396971] [ 0.19975473 0.4225844 ] [ 0.34691999 0.30800998] [ 0.11902989 0.44114437] [ 0.23984794 0.37648585] [ 0.26952843 0.34250954] [ 0.08545091 0.42378931] [ 0.19222399 0.38723272] [ 0.17260777 0.39508358] [ 0.27200846 0.32074137] [ 0.39398077 0.11745077] [-0.01772645 0.40600235] [ 0.15427268 0.36700021] [ 0.21390276 0.33576007] [ 0.10322076 0.37220205] [ 0.23183349 0.28324518] [ 0.07216042 0.35153814] [ 0.00780196 0.31923589] [ 0.15258303 0.26535002] [ 0.24813332 0.13641245]]
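For an equal area (Schmidt) net the mapping reduces to a radial distance of sqrt(1 - |sin(I)|) from the center of the unit circle. A sketch, assumed to follow the same plotting convention as pmag.dimap() (x east, y north):

```python
import numpy as np

def dimap_sketch(dec, inc):
    """Map a direction to (x, y) inside the unit circle of an equal area net."""
    d, i = np.radians(dec), np.radians(inc)
    L = np.sqrt(1. - np.abs(np.sin(i)))  # radius: 1 at inc=0, 0 at inc=90
    return L * np.sin(d), L * np.cos(d)

print(dimap_sketch(0., 0.))   # horizontal, north-pointing: plots at (0, 1)
print(dimap_sketch(0., 90.))  # vertical: plots at the origin
```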
[Essentials Chapter 9] and Changing coordinate systems [command line version]
Here we will convert D = 8.1, I = 45.2 from specimen coordinates to geographic adjusted coordinates. The orientation of the laboratory arrow on the specimen was: azimuth = 347; plunge = 27. To do this we will call pmag.dogeo(). There is also pmag.dogeo_V() for arrays of data.
So let's start with pmag.dogeo().
help(pmag.dogeo)
Help on function dogeo in module pmagpy.pmag: dogeo(dec, inc, az, pl) Rotates declination and inclination into geographic coordinates using the azimuth and plunge of the X direction (lab arrow) of a specimen. Parameters ---------- dec : declination in specimen coordinates inc : inclination in specimen coordinates Returns ------- rotated_direction : tuple of declination, inclination in geographic coordinates Examples -------- >>> pmag.dogeo(0.0,90.0,0.0,45.5) (180.0, 44.5)
pmag.dogeo(dec=8.1,inc=45.2,az=347,pl=27)
This returns approximately (5.3, 71.6): the declination and inclination in geographic coordinates.
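Under the hood, dogeo expresses the specimen's x (lab arrow), y and z axes in geographic coordinates and projects the measured direction onto them. A numpy sketch, reproducing the conversion described in the text (D = 8.1, I = 45.2, azimuth = 347, plunge = 27):

```python
import numpy as np

def d2c(dec, inc):
    """Direction to cartesian unit vector (x north, y east, z down)."""
    d, i = np.radians(dec), np.radians(inc)
    return np.array([np.cos(i)*np.cos(d), np.cos(i)*np.sin(d), np.sin(i)])

az, pl = 347., 27.
# geographic directions of the specimen's x (lab arrow), y and z axes
A = np.column_stack([d2c(az, pl), d2c(az + 90., 0.), d2c(az - 180., 90. - pl)])
v = A @ d2c(8.1, 45.2)  # rotate the measured direction into geographic coordinates
dec = np.degrees(np.arctan2(v[1], v[0])) % 360.
inc = np.degrees(np.arcsin(v[2]))
print(round(dec, 1), round(inc, 1))  # about 5.3 71.6
```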
Now let's check out the version that takes many data points at once.
help(pmag.dogeo_V)
Help on function dogeo_V in module pmagpy.pmag: dogeo_V(indat) Rotates declination and inclination into geographic coordinates using the azimuth and plunge of the X direction (lab arrow) of a specimen. Parameters ---------- indat: nested list of [dec, inc, az, pl] data Returns ------- rotated_directions : arrays of Declinations and Inclinations
indata=np.loadtxt('data_files/di_geo/di_geo_example.dat')
print (indata)
[[ 288.1 35.8 67. -36. ] [ 256.8 22.5 84. -81. ] [ 262.4 19.1 91. -48. ] [ 258.6 19.6 89. -61. ] [ 259.9 54.7 49. -76. ] [ 279.1 27.9 62. -41. ] [ 228.3 -47.5 141. -84. ] [ 249.8 25. 60. -82. ] [ 239.8 -33.9 108. -91. ] [ 271.7 50.8 28. -52. ] [ 266.8 67.1 16. -67. ] [ 238.9 51.9 27. -76. ] [ 238.9 55.3 17. -90. ] [ 252.6 41. 43. -73. ] [ 112.7 17.1 282.6 -78. ] [ 134.9 -8.9 234. -56. ] [ 138.6 -1.1 244.6 -73. ] [ 83.5 31.1 292. -28. ] [ 151.1 -35.2 196.6 -69. ] [ 146.8 -14.5 217. -51. ] [ 13.8 35. 332.6 -44. ] [ 293.1 3.9 53.5 -25.5] [ 99.5 -11. 243.6 -30. ] [ 267.8 -12.7 91.5 -49. ] [ 47. 12.8 298.6 -28. ] [ 45.8 -9. 297. -33.5] [ 81.7 -26.8 254.6 -51. ] [ 79.7 -25.7 256. -60. ] [ 84.7 -20.9 256.6 -60. ] [ 303.3 66.7 3.6 -71.5] [ 104.6 32.2 297. -100.5] [ 262.8 77.9 357.1 -87. ] [ 63.3 53.2 316. -63. ] [ 37.7 60.1 331.6 -57. ] [ 109.3 5.4 255.6 -58.5] [ 119.3 5.5 252.6 -52. ] [ 108.7 23.6 287.6 -79. ]]
Let's take a look at these data in equal area projection: (see eqarea for details)
ipmag.plot_net(1)
ipmag.plot_di(dec=indata.transpose()[0],inc=indata.transpose()[1],color='red',edge='black')
The data are highly scattered and we hope that the geographic coordinate system looks better! To find out try:
decs,incs=pmag.dogeo_V(indata)
ipmag.plot_net(1)
ipmag.plot_di(dec=decs,inc=incs,color='red',edge='black')
These data are clearly much better grouped.
And here they are printed out.
print (np.column_stack([decs,incs]))
[[ 1.23907966e+01 1.89735424e+01] [ 1.49830732e+01 1.55593373e+01] [ 1.06667819e+01 1.81693342e+01] [ 1.14047553e+01 1.89951632e+01] [ 1.24483163e+01 1.72036203e+01] [ 3.57299071e+02 1.51561580e+01] [ 3.53883281e+02 2.17091208e+01] [ 3.53789196e+02 2.16365727e+01] [ 3.40503777e+02 2.52889275e+01] [ 3.42563974e+02 2.75374519e+01] [ 3.51164668e+02 2.23293805e+01] [ 3.49415385e+02 2.99754627e+01] [ 3.46335983e+02 1.71006907e+01] [ 3.50937970e+02 2.40567015e+01] [ 3.59146910e+02 2.49558990e+01] [ 5.20812064e-01 2.94481211e+01] [ 3.54368265e+02 4.53644133e+01] [ 9.11626301e-01 2.42403293e+01] [ 3.50170459e+02 2.74704564e+01] [ 3.54249362e-02 2.81645605e+01] [ 3.43981389e+02 -8.04836591e+00] [ 3.46130907e+02 -6.14959601e+00] [ 3.47283278e+02 -4.83219850e+00] [ 3.50443170e+02 -6.65953274e+00] [ 3.44495997e+02 -6.69629260e+00] [ 3.52433892e+02 -3.06972914e+01] [ 1.55709734e+00 -2.25743459e+01] [ 4.40491709e+00 -2.08767482e+01] [ 2.54671945e+00 -1.46610862e+01] [ 3.44221055e+02 4.90397368e+00] [ 3.52498530e+02 6.46629212e+00] [ 3.45060173e+02 4.43967268e+00] [ 3.48635524e+02 7.10612965e+00] [ 3.49534584e+02 8.12663013e+00] [ 3.51173216e+02 1.92524262e+01] [ 3.57092897e+02 2.62872553e+01] [ 3.56384865e+02 2.13946656e+01]]
[Essentials Chapter 11] [command line version]
di_rot rotates dec inc pairs to a new origin. We can call pmag.dodirot() for single [dec,inc,Dbar,Ibar] data or pmag.dodirot_V() for an array of Dec, Inc pairs. We can use the data from the di_geo example and rotate the geographic coordinate data such that the center of the distribution is the principal direction.
We do it like this:
di_block=np.loadtxt('data_files/di_rot/di_rot_example.txt') # read in some data
ipmag.plot_net(1) # make the plot
ipmag.plot_di(di_block=di_block,title='geographic',color='red',edge='black')
Now we calculate the principal direction using the method described in the goprinc section.
princ=pmag.doprinc(di_block)
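The principal direction returned by pmag.doprinc() is the leading eigenvector of the orientation tensor (the normalized sum of outer products of the direction unit vectors). A minimal sketch using a made-up di_block clustered around dec=0, inc=60:

```python
import numpy as np

di = np.array([[350., 60.], [10., 55.], [355., 65.], [5., 58.]])  # made-up directions
d, i = np.radians(di[:, 0]), np.radians(di[:, 1])
X = np.column_stack([np.cos(i)*np.cos(d), np.cos(i)*np.sin(d), np.sin(i)])
T = X.T @ X / len(X)        # orientation tensor
tau, V = np.linalg.eigh(T)  # eigenvalues in ascending order
v1 = V[:, -1]               # eigenvector of the largest eigenvalue
if v1[2] < 0:               # choose the downward-pointing sense
    v1 = -v1
dec = np.degrees(np.arctan2(v1[1], v1[0])) % 360.
inc = np.degrees(np.arcsin(v1[2]))
print(round(dec, 1), round(inc, 1))  # close to the cluster center
```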
Note that we use pmag.dodirot_V() to do the rotation.
help(pmag.dodirot_V)
Help on function dodirot_V in module pmagpy.pmag: dodirot_V(di_block, Dbar, Ibar) Rotate an array of dec/inc pairs to coordinate system with Dec,Inc as 0,90 Parameters ___________________ di_block : array of [[Dec1,Inc1],[Dec2,Inc2],....] Dbar : declination of desired center Ibar : inclination of desired center Returns __________ array of rotated decs and incs: [[rot_Dec1,rot_Inc1],[rot_Dec2,rot_Inc2],....]
rot_block=pmag.dodirot_V(di_block,princ['dec'],princ['inc'])
rot_block
array([[354.75645822, 85.48653154], [ 7.99632503, 76.34238986], [218.59309456, 71.32523704], [256.72254094, 81.2757586 ], [ 36.92916127, 71.27696636], [107.0627481 , 74.65934147], [149.72796903, 84.48123415], [ 98.10291566, 69.6463126 ], [348.29161295, 72.10250018], [285.08151847, 74.70297918], [273.35642946, 68.89864852], [330.28910824, 88.29039388], [280.73571259, 70.61791032], [ 3.71124387, 76.1147856 ], [ 42.78313341, 81.09119604], [264.92037462, 82.36734047], [228.29288415, 88.01160809], [ 55.75081265, 80.3717505 ], [ 43.32707637, 84.27753112], [271.79063108, 77.21159274], [104.84899776, 83.08877923], [139.82061837, 76.3993491 ], [228.41478454, 68.21812033], [184.94964644, 85.8780573 ], [290.12100275, 80.82170974], [164.81453236, 80.16691249], [ 40.09107584, 66.25110527], [298.44492688, 70.85449532], [229.95810882, 87.16480711], [341.36209131, 56.49504557], [161.4027806 , 55.85892511], [243.45576845, 80.16805823], [ 92.95291456, 87.9768544 ], [277.0303939 , 57.70952077], [236.25509448, 77.33331201], [217.34511494, 61.35545307], [ 79.43533762, 70.47284374], [128.52228098, 52.95597767], [305.3580388 , 79.90489952], [ 74.184968 , 78.12469469], [ 21.27466927, 68.81257393], [ 23.49656306, 59.06513099], [ 44.65570709, 87.79057524], [ 57.60260544, 74.0252133 ], [120.82174991, 58.86801901], [149.68771685, 67.1484272 ], [121.72422639, 75.27918125], [181.82291633, 53.55593668], [ 33.34840452, 82.71868637], [135.24107166, 88.10701922], [312.71205472, 82.96349888], [245.36645022, 73.64842748], [ 36.71037036, 83.69777544], [ 88.92589842, 71.82553941], [303.76292348, 82.95952847], [ 46.32527218, 73.32388994], [ 8.73534352, 70.19628754], [131.5368215 , 72.47500848], [272.34524655, 65.75867311], [257.66968142, 80.83680899], [ 77.32702941, 80.56050729], [121.94427365, 63.58296413], [330.03559663, 66.08337506], [ 51.00447804, 75.7842832 ], [202.41794483, 68.43613925], [132.25871642, 71.76054598], [291.45278027, 48.82252799], [ 93.69827493, 71.5775386 ], [351.6954523 , 
79.30962868], [337.44083997, 65.69494318], [266.95948753, 68.62351084], [ 83.630119 , 79.65111206], [350.24134592, 78.048473 ], [197.96350635, 83.98150407], [197.71481284, 76.14348773], [310.55033705, 78.23192585], [ 78.77356327, 58.02447289], [259.60162484, 79.15809287], [286.77106467, 60.86301322], [205.76280361, 76.9572738 ], [156.13298085, 85.59867107], [164.45839371, 78.82453195], [ 56.73345517, 68.17101814], [333.84210333, 86.57135527], [148.52724727, 85.55221936], [222.19677177, 88.35415535], [260.32772137, 77.7377499 ], [112.97103002, 77.71005599], [ 20.05703624, 86.20899093], [322.97567646, 75.06538153], [222.26720753, 55.73330804], [139.90526594, 72.40397416], [312.48926048, 79.65647491], [200.97503066, 60.43114047], [ 82.81519452, 80.55563807], [ 4.47891063, 78.31133276], [152.93461275, 86.9201173 ], [130.61844695, 86.22975549], [ 73.38056924, 83.81526149], [183.81900156, 71.53579093]])
And of course look at what we have done!
ipmag.plot_net(1) # make the plot
ipmag.plot_di(di_block=rot_block,color='red',title='rotated',edge='black')
[Essentials Chapter 9] [Changing coordinate systems] [command line version]
di_tilt can rotate a direction of Declination = 5.3 and Inclination = 71.6 to “stratigraphic” coordinates, assuming the strike was 135 and the dip was 21. The convention in this program is to use the dip direction, which is to the “right” of this strike.
We can perform this calculation by calling pmag.dotilt() or pmag.dotilt_V(), depending on whether we have a single point or an array to rotate.
help(pmag.dotilt)
Help on function dotilt in module pmagpy.pmag: dotilt(dec, inc, bed_az, bed_dip) Does a tilt correction on a direction (dec,inc) using bedding dip direction and bedding dip. Parameters ---------- dec : declination directions in degrees inc : inclination direction in degrees bed_az : bedding dip direction bed_dip : bedding dip Returns ------- dec,inc : a tuple of rotated dec, inc values Examples ------- >>> pmag.dotilt(91.2,43.1,90.0,20.0) (90.952568837153436, 23.103411670066617)
help(pmag.dotilt_V)
Help on function dotilt_V in module pmagpy.pmag: dotilt_V(indat) Does a tilt correction on an array with rows of dec,inc bedding dip direction and dip. Parameters ---------- input : declination, inclination, bedding dip direction and bedding dip nested array of [[dec1, inc1, bed_az1, bed_dip1],[dec2,inc2,bed_az2,bed_dip2]...] Returns ------- dec,inc : arrays of rotated declination, inclination
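The rotation itself is simple enough to sketch with plain NumPy: untilting is a rotation by the bedding dip about the horizontal strike axis, which lies 90° counterclockwise of the dip direction. Here is a minimal, hypothetical dotilt_sketch() (not part of PmagPy) that reproduces the documented pmag.dotilt example:

```python
import numpy as np

def dotilt_sketch(dec, inc, bed_az, bed_dip):
    """Rotate (dec, inc) into stratigraphic coordinates by untilting the
    bedding: a rotation by -bed_dip about the horizontal strike axis,
    where bed_az is the dip direction (90 degrees clockwise of strike)."""
    d, i = np.radians([dec, inc])
    # unit vector in North, East, Down coordinates
    v = np.array([np.cos(i)*np.cos(d), np.cos(i)*np.sin(d), np.sin(i)])
    # horizontal unit vector along strike
    s = np.radians(bed_az - 90.)
    a = np.array([np.cos(s), np.sin(s), 0.])
    t = np.radians(-bed_dip)
    # Rodrigues rotation of v about axis a by angle t
    vr = (v*np.cos(t) + np.cross(a, v)*np.sin(t)
          + a*np.dot(a, v)*(1. - np.cos(t)))
    dec_t = np.degrees(np.arctan2(vr[1], vr[0])) % 360.
    inc_t = np.degrees(np.arcsin(vr[2]))
    return dec_t, inc_t

# the documented pmag.dotilt example:
print(dotilt_sketch(91.2, 43.1, 90., 20.))  # ~ (90.95, 23.10)
# the example from the text: strike 135, dip 21 -> dip direction 225
print(dotilt_sketch(5.3, 71.6, 225., 21.))
```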
# read in some data
data=np.loadtxt('data_files/di_tilt/di_tilt_example.dat') # load up the data
di_block=data[:,[0,1]] # let's look at the data first!
ipmag.plot_net(1)
ipmag.plot_di(di_block=di_block)
Now we can rotate them
Dt,It=pmag.dotilt_V(data) # rotate them
ipmag.plot_net(1) # and take another look
ipmag.plot_di(dec=Dt,inc=It)
Use the handy function np.column_stack to pair the decs and incs together
np.column_stack((Dt,It)) # if you want to see the output:
array([[ 3.74524673e+01, 4.95794971e+01], [ 3.36467520e+02, 6.09447203e+01], [ 3.38016562e+02, 2.29922937e+01], [ 3.55656248e+02, 7.51556739e+00], [ 8.17695697e+00, 5.86079487e+01], [ 6.24312543e+00, 2.98149642e+01], [ 3.57033733e+02, 5.00073921e+01], [ 3.42811107e+02, 5.85702274e+01], [ 3.39284414e+02, 3.48942163e-01], [ 3.85757431e+00, 2.17049062e+01], [ 3.54347623e+02, 4.89864710e+01], [ 2.83925013e-01, 4.85556186e+01], [ 3.35776430e+02, 6.39503873e+01], [ 1.81481921e+01, 3.27972491e+01], [ 3.53945383e+02, 3.12870301e+01], [ 3.08201120e+01, 4.80808730e+01], [ 2.80340193e+01, 4.25855265e+01], [ 3.52849360e+02, 3.85903328e+01], [ 3.51431548e+02, 4.79200709e+01], [ 1.49895755e+01, 5.82971278e+00], [ 2.01405693e+02, -2.73644346e+01], [ 1.94529222e+02, -6.03000930e+01], [ 1.51711653e+02, -3.44278588e+01], [ 2.02439369e+02, -5.45796578e+01], [ 1.78129642e+02, 1.35071395e+01], [ 1.74193635e+02, -3.83557833e+01], [ 1.92458609e+02, -4.42202097e+01], [ 1.82404516e+02, -1.00854450e+01], [ 1.87192313e+02, -5.16833347e+01], [ 1.58078673e+02, 5.20435367e-01], [ 1.81335139e+02, -3.65993719e+01], [ 1.94115720e+02, -4.70131320e+01], [ 1.58208438e+02, -6.99710447e+00], [ 2.04050923e+02, -2.39690981e+01], [ 1.77668058e+02, -3.69335311e+01], [ 1.79312818e+02, -5.36825261e+01], [ 1.73740461e+02, -2.78039697e+01], [ 1.86273785e+02, 1.48376413e+01], [ 1.68858936e+02, -3.72957242e+01], [ 1.68155763e+02, -2.35094484e+01], [ 1.33030870e+01, 4.22794606e+01], [ 3.48352284e+02, 5.58391509e+01], [ 3.53867446e+02, 5.47913351e+01], [ 8.65352111e+00, 3.91760462e+01], [ 3.40000000e+02, 7.00000000e+00], [ 3.36154223e+02, 4.78755564e+01], [ 3.81797572e+00, 2.54089397e+01], [ 1.43419909e+01, 1.56898879e+01], [ 3.23928409e+02, 3.67361188e+01], [ 1.27352972e+01, 5.63798793e+01], [ 1.42359910e+01, 5.15897401e+01], [ 3.48737862e+02, 1.13663807e+01], [ 3.45111147e+02, 2.68299034e+01], [ 3.50355082e+02, 4.74111159e+01], [ 3.52271716e+02, 7.31725552e+00], [ 3.89741169e+00, 3.22117431e+01], [ 
3.19005940e+01, 6.79140326e+01], [ 8.12324186e+00, 4.71819989e+01], [ 3.25923242e+01, 4.86194246e+01], [ 1.89143987e+01, 1.46150626e+00], [ 1.59511050e+02, -4.84321911e+01], [ 1.64070130e+02, -8.83925684e+00], [ 1.68071076e+02, -4.98009543e+01], [ 1.73357015e+02, -3.80292679e+01], [ 1.69578333e+02, -2.55976667e+01], [ 2.01113352e+02, -3.85750376e+01], [ 1.89188527e+02, 3.16419245e+00], [ 1.68193460e+02, -2.28496326e+01], [ 1.66316564e+02, -5.16025617e+01], [ 1.78511559e+02, -2.65720102e+01], [ 1.79619680e+02, -2.69659149e+01], [ 1.80072436e+02, -2.90190442e+01], [ 1.72017252e+02, -4.69940123e+01], [ 1.71028729e+02, -3.88734330e+01], [ 1.77793107e+02, -5.79993286e+01], [ 1.94987994e+02, -2.57291361e+01], [ 1.88426223e+02, -5.68143203e+01], [ 1.66302864e+02, -2.70024954e+01], [ 1.82295485e+02, -5.75960780e+01], [ 1.81867322e+02, -4.00131799e+01]])
[Essentials Chapter 2] [command line version]
di_vgp converts directions (declination,inclination) to Virtual Geomagnetic Pole positions. This is the inverse of vgp_di. To do so, we will call pmag.dia_vgp() from within the notebook.
help(pmag.dia_vgp)
Help on function dia_vgp in module pmagpy.pmag: dia_vgp(*args) Converts directional data (declination, inclination, alpha95) at a given location (Site latitude, Site longitude) to pole position (pole longitude, pole latitude, dp, dm) Parameters ---------- Takes input as (Dec, Inc, a95, Site latitude, Site longitude) Input can be as individual values (5 parameters) or as a list of lists: [[Dec, Inc, a95, lat, lon],[Dec, Inc, a95, lat, lon]] Returns ---------- if input is individual values for one pole the return is: pole longitude, pole latitude, dp, dm if input is list of lists the return is: list of pole longitudes, list of pole latitude, list of dp, list of dm
data=np.loadtxt('data_files/di_vgp/di_vgp_example.dat') # read in some data
print (data)
[[ 11. 63. 55. 13. ] [154. -58. 45.5 -73. ]]
The data are almost in the correct format, but there is no a95 field, so that will have to be inserted (as zeros).
a95=np.zeros(len(data))
a95
array([0., 0.])
DIs=data.transpose()[0:2].transpose() # get the DIs
LatLons=data.transpose()[2:].transpose() # get the Lat Lons
newdata=np.column_stack((DIs,a95,LatLons)) # stitch them back together
print (newdata)
[[ 11. 63. 0. 55. 13. ] [154. -58. 0. 45.5 -73. ]]
vgps=np.array(pmag.dia_vgp(newdata)) # get a tuple with lat,lon,dp,dm, convert to array
print (vgps.transpose()) # print out the vgps
[[154.65869784 77.3180885 0. 0. ] [ 6.62978666 -69.63701906 0. 0. ]]
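The conversion is the standard dipole construction, so we can sketch it with NumPy alone. Note that dia_vgp_sketch() is my name, not a PmagPy function, and the dp/dm confidence bounds are omitted since we passed a95 = 0 anyway:

```python
import numpy as np

def dia_vgp_sketch(dec, inc, slat, slon):
    """Direction -> VGP under the dipole assumption (no a95, so no dp/dm)."""
    D, I = np.radians(dec), np.radians(inc)
    lam, phi = np.radians(slat), np.radians(slon)
    # magnetic colatitude p from tan(I) = 2*cot(p)
    p = np.arctan2(2., np.tan(I))
    # pole latitude from the spherical triangle site-pole-north
    plat = np.arcsin(np.sin(lam)*np.cos(p) + np.cos(lam)*np.sin(p)*np.cos(D))
    # pole longitude, with the usual two-case quadrant check
    beta = np.arcsin(np.sin(p)*np.sin(D)/np.cos(plat))
    if np.cos(p) >= np.sin(lam)*np.sin(plat):
        plon = phi + beta
    else:
        plon = phi + np.pi - beta
    return np.degrees(plon) % 360., np.degrees(plat)

print(dia_vgp_sketch(11., 63., 55., 13.))  # ~ (154.66, 77.32), as above
```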
[Essentials Chapter 2] [command line version]
If we assume a geocentric axial dipole, we can calculate an expected inclination at a given latitude and that is what dipole_pinc does. It calls pmag.pinc() and so will we to find the expected inclination at a paleolatitude of 24$^{\circ}$S!
help(pmag.pinc)
Help on function pinc in module pmagpy.pmag: pinc(lat) calculate paleoinclination from latitude using dipole formula: tan(I) = 2tan(lat) Parameters ________________ lat : either a single value or an array of latitudes Returns ------- array of inclinations
lat=-24
pmag.pinc(lat)
-41.68370203503222
Or as an array
lats=range(-90,100,10)
incs=pmag.pinc(lats)
plt.plot(incs,lats)
plt.ylim(100,-100)
plt.xlabel('Inclination')
plt.ylabel('Latitude')
plt.axhline(0,color='black')
plt.axvline(0,color='black');
[Essentials Chapter 2] [command line version]
dipole_plat is similar to dipole_pinc but calculates the paleolatitude from the inclination. We will call pmag.plat():
help(pmag.plat)
Help on function plat in module pmagpy.pmag: plat(inc) calculate paleolatitude from inclination using dipole formula: tan(I) = 2tan(lat) Parameters ________________ inc : either a single value or an array of inclinations Returns ------- array of latitudes
inc=42
pmag.plat(inc)
24.237370383549177
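Both functions are one-line applications of the dipole formula tan(I) = 2 tan(λ); a NumPy-only sketch (the names pinc_sketch/plat_sketch are mine) shows they are exact inverses:

```python
import numpy as np

def pinc_sketch(lat):
    """expected inclination from latitude: tan(I) = 2*tan(lat)"""
    return np.degrees(np.arctan(2.*np.tan(np.radians(lat))))

def plat_sketch(inc):
    """paleolatitude from inclination: the inverse relation"""
    return np.degrees(np.arctan(0.5*np.tan(np.radians(inc))))

print(pinc_sketch(-24))               # ~ -41.68, as above
print(plat_sketch(42))                # ~ 24.24, as above
print(plat_sketch(pinc_sketch(-24)))  # round-trips to -24
```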
[Essentials Chapter 2] [command line version]
pmag.dir2cart() converts directions (Declination, Inclination, Intensity) to cartesian coordinates (X,Y,Z).
help(pmag.dir2cart)
Help on function dir2cart in module pmagpy.pmag: dir2cart(d) Converts a list or array of vector directions in degrees (declination, inclination) to an array of the direction in cartesian coordinates (x,y,z) Parameters ---------- d : list or array of [dec,inc] or [dec,inc,intensity] Returns ------- cart : array of [x,y,z] Examples -------- >>> pmag.dir2cart([200,40,1]) array([-0.71984631, -0.26200263, 0.64278761])
# read in data file from example file
dirs=np.loadtxt('data_files/dir_cart/dir_cart_example.dat')
print ('Input: \n',dirs) # print out the input directions
# print out the results
carts = pmag.dir2cart(dirs)
print ("Output: ")
for c in carts:
print ('%8.4e %8.4e %8.4e'%(c[0],c[1],c[2]))
Input: [[ 20. 46. 1.3] [175. -24. 4.2]] Output: 8.4859e-01 3.0886e-01 9.3514e-01 -3.8223e+00 3.3441e-01 -1.7083e+00
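The conversion is just spherical-to-cartesian with the (North, East, Down) convention; a minimal NumPy sketch (dir2cart_sketch() is a stand-in, checked against the documented pmag.dir2cart example):

```python
import numpy as np

def dir2cart_sketch(d):
    """dec, inc (degrees) and optional intensity -> x (N), y (E), z (Down)"""
    d = np.atleast_2d(d).astype(float)
    decs, incs = np.radians(d[:, 0]), np.radians(d[:, 1])
    ints = d[:, 2] if d.shape[1] == 3 else np.ones(len(d))
    return np.column_stack((ints*np.cos(incs)*np.cos(decs),
                            ints*np.cos(incs)*np.sin(decs),
                            ints*np.sin(incs)))

print(dir2cart_sketch([200., 40., 1.]))  # ~ [[-0.7198, -0.2620, 0.6428]]
```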
[Essentials Chapter 9] [MagIC Database] [command line version]
We use dmag_magic to plot out the decay of all alternating field demagnetization experiments in MagIC formatted files. Here we can take a look at some of the data from Cromwell et al. (2013, doi: 10.1002/ggge.20174).
This program calls pmagplotlib.plot_mag() to plot the demagnetization curve for a sample, site, or entire data file interactively. There is also a version, ipmag.plot_dmag(), that prepares dataframes for plotting with this function. So let's try that:
help(ipmag.plot_dmag)
Help on function plot_dmag in module pmagpy.ipmag: plot_dmag(data='', title='', fignum=1, norm=1) plots demagenetization data versus step for all specimens in pandas dataframe datablock Parameters ______________ data : Pandas dataframe with MagIC data model 3 columns: fignum : figure number specimen : specimen name demag_key : one of these: ['treat_temp','treat_ac_field','treat_mw_energy'] selected using method_codes : ['LT_T-Z','LT-AF-Z','LT-M-Z'] respectively intensity : one of these: ['magn_moment', 'magn_volume', 'magn_mass'] quality : the quality column of the DataFrame title : title for plot norm : if True, normalize data to first step Output : matptlotlib plot
Read in data from a MagIC data model 3 file. Let's go ahead and read it in with the full data hierarchy.
status,data=cb.add_sites_to_meas_table('data_files/dmag_magic')
data.head()
analysts | citations | description | dir_csd | dir_dec | dir_inc | experiment | magn_moment | meas_n_orient | meas_temp | ... | standard | timestamp | treat_ac_field | treat_dc_field | treat_dc_field_phi | treat_dc_field_theta | treat_temp | sequence | sample | site | |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
measurement name | |||||||||||||||||||||
1 | Cromwell | This study | None | 0.3 | 190.4 | 43.5 | jm002a1:LT-NO | 5.76e-05 | 4 | 273 | ... | u | 2009-10-05T22:51:00Z | 0 | 0 | 0 | 90 | 273 | 1 | jm002a | jm002 |
1 | Cromwell | This study | None | 0.2 | 193.2 | 44.5 | jm002a2:LT-NO | 6.51e-05 | 4 | 273 | ... | u | 2009-10-05T22:51:00Z | 0 | 0 | 0 | 90 | 273 | 2 | jm002a | jm002 |
1 | Cromwell | This study | None | 0.2 | 147.5 | 50.6 | jm002b1:LT-NO | 4.97e-05 | 4 | 273 | ... | u | 2009-10-05T22:48:00Z | 0 | 0 | 0 | 90 | 273 | 3 | jm002b | jm002 |
1 | Cromwell | This study | None | 0.2 | 152.5 | 55.8 | jm002b2:LT-NO | 5.23e-05 | 4 | 273 | ... | u | 2009-10-05T23:07:00Z | 0 | 0 | 0 | 90 | 273 | 4 | jm002b | jm002 |
1 | Cromwell | This study | None | 0.2 | 186.2 | 55.7 | jm002c1:LT-NO | 5.98e-05 | 4 | 273 | ... | u | 2009-10-05T22:54:00Z | 0 | 0 | 0 | 90 | 273 | 5 | jm002c | jm002 |
5 rows × 25 columns
There are several forms of intensity measurements with different normalizations.
We could hunt through the magn_* columns to see which is non-blank, or we can use the tool contribution_builder.get_intensity_col(), which returns the first non-zero column.
magn_col=cb.get_intensity_col(data)
print (magn_col)
magn_moment
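The column search is straightforward to sketch in pandas; the function below is a guess at the logic of cb.get_intensity_col(), not its actual source (the column names come from data model 3):

```python
import pandas as pd

def get_intensity_col_sketch(df):
    """Return the first magn_* column that contains any data."""
    for col in ['magn_moment', 'magn_volume', 'magn_mass', 'magn_uncal']:
        if col in df.columns and df[col].notna().any():
            return col
    return ""

# a toy measurements frame: magn_volume is empty, magn_moment is not
meas = pd.DataFrame({'magn_volume': [None, None],
                     'magn_moment': [5.76e-05, 6.51e-05]})
print(get_intensity_col_sketch(meas))  # magn_moment
```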
Let's look at what demagnetization data are available to us:
data.method_codes.unique()
array(['LT-NO', 'LT-AF-Z:LP-DIR-AF', 'LT-AF-Z:DE-VM:LP-DIR-AF', 'LT-NO:LP-PI-TRM:LP-PI-ALT-PTRM:LP-PI-BT-MD:LP-PI-BT-IZZI', 'LT-T-Z:LP-PI-TRM-ZI:LP-PI-TRM:LP-PI-ALT-PTRM:LP-PI-BT-MD:LP-PI-BT-IZZI', 'LT-T-I:LP-PI-TRM-ZI:LP-PI-TRM:LP-PI-ALT-PTRM:LP-PI-BT-MD:LP-PI-BT-IZZI', 'LT-PTRM-MD:LP-PI-TRM:LP-PI-ALT-PTRM:LP-PI-BT-MD:LP-PI-BT-IZZI', 'LT-T-I:LP-PI-TRM-IZ:LP-PI-TRM:LP-PI-ALT-PTRM:LP-PI-BT-MD:LP-PI-BT-IZZI', 'LT-T-Z:LP-PI-TRM-IZ:LP-PI-TRM:LP-PI-ALT-PTRM:LP-PI-BT-MD:LP-PI-BT-IZZI', 'LT-PTRM-I:LP-PI-TRM:LP-PI-ALT-PTRM:LP-PI-BT-MD:LP-PI-BT-IZZI', 'LT-T-I:DE-VM:LP-PI-TRM-IZ:LP-PI-TRM:LP-PI-ALT-PTRM:LP-PI-BT-MD:LP-PI-BT-IZZI', 'LT-T-I:DE-VM:LP-PI-TRM-ZI:LP-PI-TRM:LP-PI-ALT-PTRM:LP-PI-BT-MD:LP-PI-BT-IZZI', 'LT-T-Z:DE-VM:LP-PI-TRM-IZ:LP-PI-TRM:LP-PI-ALT-PTRM:LP-PI-BT-MD:LP-PI-BT-IZZI', 'LT-AF-Z:LP-AN-ARM', 'LT-AF-I:LP-AN-ARM', 'LT-T-Z:LP-DIR-T', 'LT-T-Z:DE-VM:LP-DIR-T', 'LT-T-Z:DE-VM:LP-PI-TRM-ZI:LP-PI-TRM:LP-PI-ALT-PTRM:LP-PI-BT-MD:LP-PI-BT-IZZI', None], dtype=object)
Oops - at least one of our records has a blank method_codes field! So let's get rid of that one.
data=data.dropna(subset=['method_codes'])
We can make the plots in this way:
af_df=data[data.method_codes.str.contains('LP-DIR-AF')] # select the AF demag data
af_df=af_df.dropna(subset=['treat_ac_field'])
df=af_df[['specimen','treat_ac_field',magn_col,'quality']]
df.head()
specimen | treat_ac_field | magn_moment | quality | |
---|---|---|---|---|
measurement name | ||||
1 | jm002a1 | 0.005 | 4.55e-05 | g |
2 | jm002a1 | 0.01 | 2.08e-05 | g |
3 | jm002a1 | 0.015 | 1.47e-05 | g |
4 | jm002a1 | 0.02 | 1.15e-05 | g |
5 | jm002a1 | 0.03 | 8e-06 | g |
ipmag.plot_dmag(data=df,title="AF demag",fignum=1)
This plotted all the data in the file. We could also plot the data by site by getting a unique list of site names and then walking through them one by one:
sites=af_df.site.unique()
cnt=1
for site in sites:
site_df=af_df[af_df.site==site] # fish out this site
# trim to only AF data.
site_df=site_df[['specimen','treat_ac_field',magn_col,'quality']]
ipmag.plot_dmag(data=site_df,title=site,fignum=cnt)
cnt+=1
We could repeat this for thermal data using 'LT-T-Z' as the method_codes key and treat_temp as the demagnetization step. We could also save the plots using plt.savefig('FIGNAME.FMT'), where FIGNAME could be the site, location, or demag type, as you wish.
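For illustration, the thermal selection might look like this, sketched on a toy measurements frame rather than the file read above:

```python
import pandas as pd

# a toy measurements table standing in for the MagIC data read above
meas = pd.DataFrame({
    'specimen': ['s1']*3 + ['s2']*3,
    'method_codes': ['LT-T-Z:LP-DIR-T']*6,
    'treat_temp': [273., 373., 473.]*2,
    'magn_moment': [5.0e-5, 3.1e-5, 1.2e-5, 4.2e-5, 2.5e-5, 0.9e-5],
    'quality': ['g']*6,
})
# select the thermal demagnetization steps, as for the AF data above
th_df = meas[meas.method_codes.str.contains('LT-T-Z')]
th_df = th_df.dropna(subset=['treat_temp'])
df = th_df[['specimen', 'treat_temp', 'magn_moment', 'quality']]
print(df.shape)  # (6, 4)
# ipmag.plot_dmag(data=df, title="Thermal demag", fignum=1) would then plot it
```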
Now let's look at a downloaded contribution using dmag_magic as before, but this time with thermal demagnetization.
ipmag.download_magic("magic_contribution_16533.txt", dir_path="data_files/download_magic",
input_dir_path="data_files/download_magic")
status,data=cb.add_sites_to_meas_table('data_files/download_magic')
df=data[data.method_codes.str.contains('LT-T-Z')] # select the thermal demag data
df=df[['specimen','treat_temp','magn_moment','quality']]
df=df.dropna(subset=['treat_temp','magn_moment'])
ipmag.plot_dmag(data=df,title="Thermal demag",fignum=1, dmag_key='treat_temp')
working on: 'contribution' 1 records written to file /Users/nebula/Python/PmagPy/data_files/download_magic/contribution.txt contribution data put in /Users/nebula/Python/PmagPy/data_files/download_magic/contribution.txt working on: 'locations' 3 records written to file /Users/nebula/Python/PmagPy/data_files/download_magic/locations.txt locations data put in /Users/nebula/Python/PmagPy/data_files/download_magic/locations.txt working on: 'sites' 52 records written to file /Users/nebula/Python/PmagPy/data_files/download_magic/sites.txt sites data put in /Users/nebula/Python/PmagPy/data_files/download_magic/sites.txt working on: 'samples' 271 records written to file /Users/nebula/Python/PmagPy/data_files/download_magic/samples.txt samples data put in /Users/nebula/Python/PmagPy/data_files/download_magic/samples.txt working on: 'specimens' 225 records written to file /Users/nebula/Python/PmagPy/data_files/download_magic/specimens.txt specimens data put in /Users/nebula/Python/PmagPy/data_files/download_magic/specimens.txt working on: 'measurements' 3072 records written to file /Users/nebula/Python/PmagPy/data_files/download_magic/measurements.txt measurements data put in /Users/nebula/Python/PmagPy/data_files/download_magic/measurements.txt working on: 'criteria' 20 records written to file /Users/nebula/Python/PmagPy/data_files/download_magic/criteria.txt criteria data put in /Users/nebula/Python/PmagPy/data_files/download_magic/criteria.txt working on: 'ages' 20 records written to file /Users/nebula/Python/PmagPy/data_files/download_magic/ages.txt ages data put in /Users/nebula/Python/PmagPy/data_files/download_magic/ages.txt
[Essentials Chapter 13] [command line version]
This program converts eigenparameters to the six tensor elements.
There is a function ipmag.eigs_s() which will do this in a notebook:
help(ipmag.eigs_s)
Help on function eigs_s in module pmagpy.ipmag: eigs_s(infile='', dir_path='.') Converts eigenparamters format data to s format Parameters ___________________ Input: file : input file name with eigenvalues (tau) and eigenvectors (V) with format: tau_1 V1_dec V1_inc tau_2 V2_dec V2_inc tau_3 V3_dec V3_inc Output the six tensor elements as a nested array [[x11,x22,x33,x12,x23,x13],....]
Ss=ipmag.eigs_s(infile="eigs_s_example.dat", dir_path='data_files/eigs_s')
for s in Ss:
print (s)
[0.33416328, 0.33280227, 0.33303446, -0.00016631071, 0.0012316267, 0.0013552071] [0.33555713, 0.33197427, 0.3324687, 0.00085685047, 0.00025266458, 0.0009815096] [0.335853, 0.33140355, 0.3327435, 0.0013230764, 0.0011778723, 4.5534102e-06] [0.3347939, 0.33140817, 0.33379796, -0.0004308845, 0.0004885784, 0.00045610438] [0.33502916, 0.33117944, 0.3337915, -0.00106313, 0.00029828132, 0.00035882858] [0.33407047, 0.3322691, 0.33366045, -6.384468e-06, 0.0009844461, 5.9963346e-05] [0.33486328, 0.33215088, 0.3329859, -0.0003427944, 0.00038177703, 0.0002014497] [0.33509853, 0.33195898, 0.33294258, 0.000769761, 0.00056717254, 0.00011960149]
[Essentials Appendix B] [command line version]
Data are frequently published as equal area projections and not listed in data tables. These data can be digitized as x,y data (assuming the outer rim is unity) and converted to approximate directions with the program eq_di. To use this program, install a graph digitizer (GraphClick from http://www.arizona-software.ch/graphclick/ works on Macs).
Digitize the data from the equal area projection saved in the file eqarea.png in the eq_di directory. You should only work on one hemisphere at a time (upper or lower) and save each hemisphere in its own file. Then you can convert the X,Y data to approximate dec and inc data - the quality of the data depends on your care in digitizing and the quality of the figure that you are digitizing.
Here we will try this out on a datafile already prepared, which contains the digitized data from the lower hemisphere of a plot. You can check your work with eqarea.
To do this in a notebook, we can use pmag.doeqdi().
help(pmag.doeqdi)
Help on function doeqdi in module pmagpy.pmag: doeqdi(x, y, UP=False) Takes digitized x,y, data and returns the dec,inc, assuming an equal area projection Parameters __________________ x : array of digitized x from point on equal area projection y : array of igitized y from point on equal area projection UP : if True, is an upper hemisphere projection Output : dec : declination inc : inclination
# read in the data into an array
# x is assumed first column, y, second
xy=np.loadtxt('data_files/eq_di/eq_di_example.dat').transpose()
decs,incs=pmag.doeqdi(xy[0],xy[1])
ipmag.plot_net(1)
ipmag.plot_di(dec=decs,inc=incs,color='r',edge='black')
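The inversion rests on the equal-area property that a point at radius r on a unit net has sin|I| = 1 − r². A NumPy sketch follows; doeqdi_sketch() is my name, and the azimuth convention (x toward the right of the plot, declination clockwise from the top) is an assumption about the digitizing layout:

```python
import numpy as np

def doeqdi_sketch(x, y, UP=False):
    """Invert an equal-area projection with unit outer rim: a point at
    radius r has |inc| = arcsin(1 - r**2); dec is the azimuth of (x, y)
    measured clockwise from the top of the plot."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    r2 = x**2 + y**2
    inc = np.degrees(np.arcsin(1. - r2))
    dec = np.degrees(np.arctan2(x, y)) % 360.
    if UP:
        inc = -inc
    return dec, inc

# round trip: project dec=30, inc=45 forward, then invert it
dec0, inc0 = 30., 45.
r = np.sqrt(1. - np.sin(np.radians(inc0)))
x0, y0 = r*np.sin(np.radians(dec0)), r*np.cos(np.radians(dec0))
dec, inc = doeqdi_sketch(x0, y0)
print(round(float(dec), 4), round(float(inc), 4))  # 30.0 45.0
```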
[Essentials Chapter 2][Essentials Appendix B] [command line version]
The problem of plotting equal area projections in Jupyter notebooks was solved by Nick Swanson-Hysell who started the ipmag module just for this purpose! We use ipmag.plot_net() to plot the net, then ipmag.plot_di() to plot the directions.
help(ipmag.plot_di)
Help on function plot_di in module pmagpy.ipmag: plot_di(dec=None, inc=None, di_block=None, color='k', marker='o', markersize=20, legend='no', label='', title='', edge='') Plot declination, inclination data on an equal area plot. Before this function is called a plot needs to be initialized with code that looks something like: >fignum = 1 >plt.figure(num=fignum,figsize=(10,10),dpi=160) >ipmag.plot_net(fignum) Required Parameters ----------- dec : declination being plotted inc : inclination being plotted or di_block: a nested list of [dec,inc,1.0] (di_block can be provided instead of dec, inc in which case it will be used) Optional Parameters (defaults are used if not specified) ----------- color : the default color is black. Other colors can be chosen (e.g. 'r') marker : the default marker is a circle ('o') markersize : default size is 20 label : the default label is blank ('') legend : the default is no legend ('no'). Putting 'yes' will plot a legend. edge : marker edge color - if blank, is color of marker
di_block=np.loadtxt('data_files/eqarea/fishrot.out')
ipmag.plot_net(1)
ipmag.plot_di(di_block=di_block,color='red',edge='black')
[Essentials Chapter 11] [Essentials Chapter 12] [Essentials Appendix B] [command line version]
This program makes equal area projections with confidence ellipses for dec, inc pairs.
We make the equal area projections with the ipmag.plot_net() and ipmag.plot_di() functions. The options in eqarea_ell are:
- Bingham mean and ellipse(s)
- Fisher mean(s) and alpha_95(s)
- Kent mean(s) - same as Fisher - and Kent ellipse(s)
- Bootstrapped mean(s) - same as Fisher - and ellipse(s)
- Bootstrapped eigenvectors
For Bingham mean, the N/R data are assumed antipodal and the procedure would be:
- plot the data
- calculate the Bingham ellipse with pmag.dobingham()
- plot the ellipse using ipmag.plot_di_mean_ellipse()
For all the others, the data are not assumed antipodal and must be separated into normal and reverse modes. To do that you can either use pmag.separate_directions() and calculate ellipses for each mode, OR use pmag.flip() to flip the reverse mode onto the normal mode. To calculate the ellipses:
- calculate the ellipses for each mode (or the flipped data set):
- Kent: use pmag.dokent(), setting NN to the number of data points
- Bootstrap : use pmag.di_boot() to generate the bootstrapped means
- either just plot the eigenvectors (ipmag.plot_di()) OR
- calculate the bootstrapped ellipses with pmag.dokent(), setting NN to 1
- Parametric bootstrap : you need a pandas data frame with the site mean directions, n and kappa. Then you can use pmag.dir_df_boot().
- plot the ellipses if desired.
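The bootstrap step in this recipe can be sketched with plain NumPy; fisher_mean_sketch() and di_boot_sketch() below are simplified stand-ins for ipmag.fisher_mean() and pmag.di_boot():

```python
import numpy as np

rng = np.random.default_rng(42)

def fisher_mean_sketch(di_block):
    """direction of the vector resultant of unit vectors (dec, inc in degrees)"""
    d = np.radians(np.asarray(di_block, float))
    v = np.column_stack((np.cos(d[:, 1])*np.cos(d[:, 0]),
                         np.cos(d[:, 1])*np.sin(d[:, 0]),
                         np.sin(d[:, 1])))
    r = v.sum(axis=0)
    r /= np.linalg.norm(r)
    return (np.degrees(np.arctan2(r[1], r[0])) % 360.,
            np.degrees(np.arcsin(r[2])))

def di_boot_sketch(di_block, nb=500):
    """nb bootstrap pseudo-samples, each a resample with replacement"""
    di = np.asarray(di_block, float)
    n = len(di)
    return np.array([fisher_mean_sketch(di[rng.integers(0, n, n)])
                     for _ in range(nb)])

# a toy single-mode di_block; the bootstrap means cluster around its mean
di_block = [[175., 55.], [185., 60.], [178., 52.], [190., 57.], [172., 61.]]
bdis = di_boot_sketch(di_block)
print(bdis.shape)  # (500, 2)
```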
#read in the data into an array
vectors=np.loadtxt('data_files/eqarea_ell/tk03.out').transpose()
di_block=vectors[0:2].transpose() # decs are di_block[0], incs are di_block[1]
di_block
array([[182.7, -64.7], [354.7, 62.8], [198.1, -68.1], [344.8, 61.8], [194. , -56.5], [350. , 56.1], [214.2, -55.3], [344.9, 56.5], [172.6, -70.7], [ 3. , 60.9], [155.2, -60.2], [ 8.4, 65.1], [183.5, -56.5], [342.5, 56.1], [175.5, -53.4], [338.9, 73.3], [169.8, -56.9], [347.1, 45.9], [183.2, -52.5], [ 12.5, 57.5]])
help(pmag.dobingham)
Help on function dobingham in module pmagpy.pmag: dobingham(di_block) Calculates the Bingham mean and associated statistical parameters from directions that are input as a di_block Parameters ---------- di_block : a nested list of [dec,inc] or [dec,inc,intensity] Returns ------- bpars : dictionary containing the Bingham mean and associated statistics dictionary keys dec : mean declination inc : mean inclination n : number of datapoints Eta : major ellipse Edec : declination of major ellipse axis Einc : inclination of major ellipse axis Zeta : minor ellipse Zdec : declination of minor ellipse axis Zinc : inclination of minor ellipse axis
help(ipmag.plot_di_mean_ellipse)
Help on function plot_di_mean_ellipse in module pmagpy.ipmag: plot_di_mean_ellipse(dictionary, fignum=1, color='k', marker='o', markersize=20, label='', legend='no') Plot a mean direction (declination, inclination) confidence ellipse. Parameters ----------- dictionary : a dictionary generated by the pmag.dobingham or pmag.dokent funcitons
ipmag.plot_net(1)
ipmag.plot_di(di_block=di_block)
bpars=pmag.dobingham(di_block)
ipmag.plot_di_mean_ellipse(bpars,color='red',marker='^',markersize=50)
help(pmag.separate_directions)
Help on function separate_directions in module pmagpy.pmag: separate_directions(di_block) Separates set of directions into two modes based on principal direction Parameters _______________ di_block : block of nested dec,inc pairs Return mode_1_block,mode_2_block : two lists of nested dec,inc pairs
vectors=np.loadtxt('data_files/eqarea_ell/tk03.out').transpose()
di_block=vectors[0:2].transpose() # decs are di_block[0], incs are di_block[1]
mode_1,mode_2=pmag.separate_directions(di_block)
help(ipmag.fisher_mean)
Help on function fisher_mean in module pmagpy.ipmag: fisher_mean(dec=None, inc=None, di_block=None) Calculates the Fisher mean and associated parameters from either a list of declination values and a separate list of inclination values or from a di_block (a nested list a nested list of [dec,inc,1.0]). Returns a dictionary with the Fisher mean and statistical parameters. Parameters ---------- dec : list of declinations or longitudes inc : list of inclinations or latitudes di_block : a nested list of [dec,inc,1.0] A di_block can be provided instead of dec, inc lists in which case it will be used. Either dec, inc lists or a di_block need to be provided. Returns ------- fisher_mean : dictionary containing the Fisher mean parameters Examples -------- Use lists of declination and inclination to calculate a Fisher mean: >>> ipmag.fisher_mean(dec=[140,127,142,136],inc=[21,23,19,22]) {'alpha95': 7.292891411309177, 'csd': 6.4097743211340896, 'dec': 136.30838974272072, 'inc': 21.347784026899987, 'k': 159.69251473636305, 'n': 4, 'r': 3.9812138971889026} Use a di_block to calculate a Fisher mean (will give the same output as the example with the lists): >>> ipmag.fisher_mean(di_block=[[140,21],[127,23],[142,19],[136,22]])
mode_1_fpars=ipmag.fisher_mean(di_block=mode_1)
mode_2_fpars=ipmag.fisher_mean(di_block=mode_2)
help(ipmag.plot_di_mean)
Help on function plot_di_mean in module pmagpy.ipmag: plot_di_mean(dec, inc, a95, color='k', marker='o', markersize=20, label='', legend='no') Plot a mean direction (declination, inclination) with alpha_95 ellipse on an equal area plot. Before this function is called, a plot needs to be initialized with code that looks something like: >fignum = 1 >plt.figure(num=fignum,figsize=(10,10),dpi=160) >ipmag.plot_net(fignum) Required Parameters ----------- dec : declination of mean being plotted inc : inclination of mean being plotted a95 : a95 confidence ellipse of mean being plotted Optional Parameters (defaults are used if not specified) ----------- color : the default color is black. Other colors can be chosen (e.g. 'r'). marker : the default is a circle. Other symbols can be chosen (e.g. 's'). markersize : the default is 20. Other sizes can be chosen. label : the default is no label. Labels can be assigned. legend : the default is no legend ('no'). Putting 'yes' will plot a legend.
# plot the data
ipmag.plot_net(1)
ipmag.plot_di(di_block=di_block,color='red',edge='black')
# draw on the means and alpha95
ipmag.plot_di_mean(dec=mode_1_fpars['dec'],inc=mode_1_fpars['inc'],a95=mode_1_fpars['alpha95'],\
marker='*',color='blue',markersize=50)
ipmag.plot_di_mean(dec=mode_2_fpars['dec'],inc=mode_2_fpars['inc'],a95=mode_2_fpars['alpha95'],\
marker='*',color='blue',markersize=50)
help(pmag.dokent)
Help on function dokent in module pmagpy.pmag: dokent(data, NN) gets Kent parameters for data Parameters ___________________ data : nested pairs of [Dec,Inc] NN : normalization NN is the number of data for Kent ellipse NN is 1 for Kent ellipses of bootstrapped mean directions Return kpars dictionary keys dec : mean declination inc : mean inclination n : number of datapoints Eta : major ellipse Edec : declination of major ellipse axis Einc : inclination of major ellipse axis Zeta : minor ellipse Zdec : declination of minor ellipse axis Zinc : inclination of minor ellipse axis
mode_1_kpars=pmag.dokent(mode_1,len(mode_1))
mode_2_kpars=pmag.dokent(mode_2,len(mode_2))
# plot the data
ipmag.plot_net(1)
ipmag.plot_di(di_block=di_block,color='red',edge='black')
# draw on the means and alpha95
ipmag.plot_di_mean_ellipse(mode_1_kpars,marker='*',color='cyan',markersize=20)
ipmag.plot_di_mean_ellipse(mode_2_kpars,marker='*',color='cyan',markersize=20)
help(pmag.di_boot)
Help on function di_boot in module pmagpy.pmag: di_boot(DIs, nb=5000) returns bootstrap means for Directional data Parameters _________________ DIs : nested list of Dec,Inc pairs nb : number of bootstrap pseudosamples Returns ------- BDIs: nested list of bootstrapped mean Dec,Inc pairs
mode_1_BDIs=pmag.di_boot(mode_1)
mode_2_BDIs=pmag.di_boot(mode_2)
ipmag.plot_net(1)
ipmag.plot_di(di_block=mode_1_BDIs,color='cyan',markersize=1)
ipmag.plot_di(di_block=mode_2_BDIs,color='cyan',markersize=1)
ipmag.plot_di(di_block=di_block,color='red',edge='black')
mode_1_bpars=pmag.dokent(mode_1_BDIs,1)
mode_2_bpars=pmag.dokent(mode_2_BDIs,1)
# plot the data
ipmag.plot_net(1)
ipmag.plot_di(di_block=di_block,color='red',edge='black')
# draw on the means and alpha95
ipmag.plot_di_mean_ellipse(mode_1_bpars,marker='*',color='cyan',markersize=20)
ipmag.plot_di_mean_ellipse(mode_2_bpars,marker='*',color='cyan',markersize=20)
[Essentials Chapter 2] [MagIC Database] [command line version]
eqarea_magic takes MagIC data model 3 files and makes equal area projections of declination, inclination data for a variety of selections, i.e., all the data, by site, by sample, or by specimen.
It can plot in different coordinate systems (if available) and add various confidence ellipses. It will also make a color contour plot if desired.
We will do this with ipmag.plot_net() and ipmag.plot_di() using Pandas filtering capability.
Let's start with a simple plot of site mean directions, assuming that they were interpreted from measurements using pmag_gui.py or some such program and have all the required meta-data.
We want data in geographic coordinates (dir_tilt_correction=0). The keys for directions are dir_dec and dir_inc. One could add the ellipses using ipmag.plot_di_mean_ellipse().
sites=pd.read_csv('data_files/eqarea_magic/sites.txt',sep='\t',header=1)
site_dirs=sites[sites['dir_tilt_correction']==0]
ipmag.plot_net(1)
di_block=site_dirs[['dir_dec','dir_inc']].values
#ipmag.plot_di(sites['dir_dec'].values,sites['dir_inc'].values,color='blue',markersize=50)
ipmag.plot_di(di_block=di_block,color='blue',markersize=50)
# or, using ipmag.eqarea_magic:
ipmag.eqarea_magic('data_files/eqarea_magic/sites.txt', save_plots=False)
-W- Couldn't read in specimens data -I- Make sure you've provided the correct file name -W- Couldn't read in specimens data -I- Make sure you've provided the correct file name -W- Couldn't read in specimens data for data propagation 388 records read from data_files/eqarea_magic/sites.txt All
(True, [])
For this we can use the function pmagplotlib.plot_eq_cont(), which makes a color contour plot of dec, inc data:
help(pmagplotlib.plot_eq_cont)
Help on function plot_eq_cont in module pmagpy.pmagplotlib: plot_eq_cont(fignum, DIblock, color_map='coolwarm') plots dec inc block as a color contour Parameters __________________ Input: fignum : figure number DIblock : nested pairs of [Declination, Inclination] color_map : matplotlib color map [default is coolwarm] Output: figure
ipmag.plot_net(1)
pmagplotlib.plot_eq_cont(1,di_block)
# with ipmag.eqarea_magic
ipmag.eqarea_magic('data_files/eqarea_magic/sites.txt', save_plots=False, contour=True)
-W- Couldn't read in specimens data -I- Make sure you've provided the correct file name -W- Couldn't read in specimens data -I- Make sure you've provided the correct file name -W- Couldn't read in specimens data for data propagation 388 records read from data_files/eqarea_magic/sites.txt All
(True, [])
This study averaged specimens (not samples) by site, so we would like to make plots of all the specimen data for each site. We can do this in a similar way to what we did in the dmag_magic example.
A few particulars:
# read in specimen table
spec_df=pd.read_csv('data_files/eqarea_magic/specimens.txt',sep='\t',header=1)
# read in sample table
samp_df=pd.read_csv('data_files/eqarea_magic/samples.txt',sep='\t',header=1)
# get only what we need from samples (sample to site mapping)
samp_df=samp_df[['sample','site']]
# merge site to specimen name in the specimen data frame
df_ext=pd.merge(spec_df,samp_df,how='inner',on='sample')
# take the first 11 sites
sites=df_ext.site.unique()[0:11]
We need to filter specimen data for dir_tilt_correction=0 and separate into DE-BFP (best fit planes) and not.
# get the geographic coordinates
spec_df=df_ext[df_ext.dir_tilt_correction==0]
# filter to exclude planes
spec_lines=spec_df[spec_df.method_codes.str.contains('DE-BFP')==False]
# filter for planes
spec_df_gc=spec_df[spec_df.method_codes.str.contains('DE-BFP')==True]
# here's a new one:
help(ipmag.plot_gc)
Help on function plot_gc in module pmagpy.ipmag: plot_gc(poles, color='g', fignum=1) plots a great circle on an equal area projection Parameters ____________________ Input fignum : number of matplotlib object poles : nested list of [Dec,Inc] pairs of poles color : color of lower hemisphere dots for great circle - must be in form: 'g','r','y','k',etc. upper hemisphere is always cyan
cnt=1
for site in sites:
    plt.figure(cnt)
    ipmag.plot_net(cnt)
    plt.title(site)
    site_lines=spec_lines[spec_lines.site==site] # fish out this site
    ipmag.plot_di(site_lines.dir_dec.values,site_lines.dir_inc.values)
    site_planes=spec_df_gc[spec_df_gc.site==site]
    poles=site_planes[['dir_dec','dir_inc']].values
    if poles.shape[0]>0:
        ipmag.plot_gc(poles,fignum=cnt,color='r')
    cnt+=1
# using ipmag.eqarea_magic:
ipmag.eqarea_magic('specimens.txt', 'data_files/eqarea_magic', plot_by='sit', save_plots=False)
1374 records read from specimens.txt mc01 mc02 mc03 mc04 mc06
(True, [])
To plot the measurement data specimen by specimen, we can do it like this:
# read in measurements table
meas_df=pd.read_csv('data_files/eqarea_magic/measurements.txt',sep='\t',header=1)
specimens=meas_df.specimen.unique()[0:11]
cnt=1
for spec in specimens:
    meas_spc=meas_df[meas_df.specimen==spec]
    plt.figure(cnt)
    ipmag.plot_net(cnt)
    plt.title(spec)
    ipmag.plot_di(meas_spc.dir_dec.values,meas_spc.dir_inc.values)
    cnt+=1
# using ipmag.eqarea_magic:
ipmag.eqarea_magic('specimens.txt', 'data_files/eqarea_magic', plot_by='spc', save_plots=False)
1374 records read from specimens.txt mc01a no records for plotting mc01b mc01c mc01d mc01e
(True, [])
[Essentials Chapter 14] [MagIC Database] [command line version]
This program is meant to find the unflattening factor (see unsquish documentation) that brings a sedimentary data set into agreement with the statistical field model TK03 of Tauxe and Kent (2004, doi: 10.1029/145GM08). It has been implemented for notebooks as ipmag.find_ei().
A data file (data_files/find_EI/find_EI_example.dat) was prepared using the program tk03 to simulate directions at a latitude of 42$^{\circ}$, with an expected inclination of 61$^{\circ}$ (which could be calculated using dipole_pinc, of course).
help(ipmag.find_ei)
Help on function find_ei in module pmagpy.ipmag: find_ei(data, nb=1000, save=False, save_folder='.', fmt='svg', site_correction=False, return_new_dirs=False) Applies series of assumed flattening factor and "unsquishes" inclinations assuming tangent function. Finds flattening factor that gives elongation/inclination pair consistent with TK03; or, if correcting by site instead of for study-level secular variation, finds flattening factor that minimizes elongation and most resembles a Fisherian distribution. Finds bootstrap confidence bounds Required Parameter ----------- data: a nested list of dec/inc pairs Optional Parameters (defaults are used unless specified) ----------- nb: number of bootstrapped pseudo-samples (default is 1000) save: Boolean argument to save plots (default is False) save_folder: path to folder in which plots should be saved (default is current directory) fmt: specify format of saved plots (default is 'svg') site_correction: Boolean argument to specify whether to "unsquish" data to 1) the elongation/inclination pair consistent with TK03 secular variation model (site_correction = False) or 2) a Fisherian distribution (site_correction = True). Default is FALSE. Note that many directions (~ 100) are needed for this correction to be reliable. return_new_dirs: optional return of newly "unflattened" directions (default is False) Returns ----------- four plots: 1) equal area plot of original directions 2) Elongation/inclination pairs as a function of f, data plus 25 bootstrap samples 3) Cumulative distribution of bootstrapped optimal inclinations plus uncertainties. Estimate from original data set plotted as solid line 4) Orientation of principle direction through unflattening NOTE: If distribution does not have a solution, plot labeled: Pathological. Some bootstrap samples may have valid solutions and those are plotted in the CDFs and E/I plot.
data=np.loadtxt('data_files/find_EI/find_EI_example.dat')
ipmag.find_ei(data)
Bootstrapping.... be patient The original inclination was: 38.92904490925402 The corrected inclination is: 58.83246032206779 with bootstrapped confidence bounds of: 48.28842274376625 to 67.01516864172395 and elongation parameter of: 1.4678654859428288 The flattening factor is: 0.4249999999999995
In this example, the original expected inclination at a paleolatitude of 42$^{\circ}$ (61$^{\circ}$) is recovered within the 95% confidence bounds.
pmag.fcalc() returns the 95% critical value of an F-test from an F table.
help(pmag.fcalc)
Help on function fcalc in module pmagpy.pmag: fcalc(col, row) looks up an F-test stastic from F tables F(col,row), where row is number of degrees of freedom - this is 95% confidence (p=0.05). Parameters _________ col : degrees of freedom column row : degrees of freedom row Returns F : value for 95% confidence from the F-table
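pmag.fcalc() looks this up in a built-in table; the same 95% critical value can also be computed from the F distribution itself. A minimal sketch using scipy (f_crit is my own name, not part of PmagPy):

```python
from scipy.stats import f

def f_crit(col, row, p=0.05):
    """Critical value of the F distribution with (col, row)
    degrees of freedom at confidence level 1-p."""
    return f.ppf(1 - p, col, row)

print(round(f_crit(2, 10), 2))  # → 4.1
```

This should agree with pmag.fcalc(2, 10) to the precision of the table.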
[Essentials Chapter 11] [command line version]
fisher draws $N$ directions from a Fisher distribution with specified $\kappa$ and a vertical mean. (For other directions see fishrot). To do this, we can just call the function pmag.fshdev() $N$ times.
help(pmag.fshdev)
Help on function fshdev in module pmagpy.pmag: fshdev(k) Generate a random draw from a Fisher distribution with mean declination of 0 and inclination of 90 with a specified kappa. Parameters ---------- k : kappa (precision parameter) of the distribution k can be a single number or an array of values Returns ---------- dec, inc : declination and inclination of random Fisher distribution draw if k is an array, dec, inc are returned as arrays, otherwise, single values
# set the number, N, and kappa
N,kappa=100,20
# a basket to put our fish in
fish=[]
# get the Fisherian deviates
for _ in range(N):
    d,i=pmag.fshdev(kappa)
    fish.append([d,i])
ipmag.plot_net(1)
ipmag.plot_di(di_block=fish,color='r',edge='black')
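pmag.fshdev() uses the standard inverse-transform draw for the Fisher distribution. A self-contained, vectorized NumPy sketch of the same idea (fisher_devs is my own name, not the PmagPy implementation):

```python
import numpy as np

def fisher_devs(kappa, n, rng=None):
    """Draw n (dec, inc) pairs from a Fisher distribution with a
    vertical (inc = 90) mean, using the inverse-CDF trick."""
    rng = np.random.default_rng() if rng is None else rng
    r1, r2 = rng.random(n), rng.random(n)
    L = np.exp(-2.0 * kappa)
    # colatitude: the Fisher density integrates and inverts analytically
    fac = np.sqrt(-np.log(r1 * (1.0 - L) + L) / (2.0 * kappa))
    inc = 90.0 - 2.0 * np.degrees(np.arcsin(fac))
    dec = 360.0 * r2    # declination is uniform about a vertical mean
    return dec, inc

dec, inc = fisher_devs(20, 100, rng=np.random.default_rng(42))
```

For large kappa the draws cluster tightly about the vertical, just as in the plot above.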
[Essentials Chapter 11] [command line version]
This program was meant to test whether a given directional data set is Fisher distributed using a Quantile-Quantile plot (see also qqunf or qqplot for more on Quantile-Quantile plots).
Blessedly, fishqq has been incorporated into ipmag.fishqq() for use within notebooks.
help(ipmag.fishqq)
Help on function fishqq in module pmagpy.ipmag: fishqq(lon=None, lat=None, di_block=None) Test whether a distribution is Fisherian and make a corresponding Q-Q plot. The Q-Q plot shows the data plotted against the value expected from a Fisher distribution. The first plot is the uniform plot which is the Fisher model distribution in terms of longitude (declination). The second plot is the exponential plot which is the Fisher model distribution in terms of latitude (inclination). In addition to the plots, the test statistics Mu (uniform) and Me (exponential) are calculated and compared against the critical test values. If Mu or Me are too large in comparision to the test statistics, the hypothesis that the distribution is Fisherian is rejected (see Fisher et al., 1987). Parameters: ----------- lon : longitude or declination of the data lat : latitude or inclination of the data or di_block: a nested list of [dec,inc] A di_block can be provided in which case it will be used instead of dec, inc lists. Output: ----------- dictionary containing lon : mean longitude (or declination) lat : mean latitude (or inclination) N : number of vectors Mu : Mu test statistic value for the data Mu_critical : critical value for Mu Me : Me test statistic value for the data Me_critical : critical value for Me if the data has two modes with N >=10 (N and R) two of these dictionaries will be returned Examples -------- In this example, directions are sampled from a Fisher distribution using ``ipmag.fishrot`` and then the ``ipmag.fishqq`` function is used to test whether that distribution is Fisherian: >>> directions = ipmag.fishrot(k=40, n=50, dec=200, inc=50) >>> ipmag.fishqq(di_block = directions) {'Dec': 199.73564290371894, 'Inc': 49.017612342358298, 'Me': 0.78330310031220352, 'Me_critical': 1.094, 'Mode': 'Mode 1', 'Mu': 0.69915926146177099, 'Mu_critical': 1.207, 'N': 50, 'Test_result': 'consistent with Fisherian model'} The above example passed a di_block to the function as an input. 
Lists of paired declination and inclination can also be used as inputs. Here the directions di_block is unpacked to separate declination and inclination lists using the ``ipmag.unpack_di_block`` function, which are then used as input to fishqq: >>> dec_list, inc_list = ipmag.unpack_di_block(directions) >>> ipmag.fishqq(lon=dec_list, lat=inc_list)
di_block=np.loadtxt('data_files/fishqq/fishqq_example.txt')
fqpars=ipmag.fishqq(di_block=di_block)
print (fqpars['Test_result'])
consistent with Fisherian model
[Essentials Chapter 11] [command line version]
This program is similar to fisher, but allows you to specify the mean direction. This has been implemented as ipmag.fishrot().
help(ipmag.fishrot)
Help on function fishrot in module pmagpy.ipmag: fishrot(k=20, n=100, dec=0, inc=90, di_block=True) Generates Fisher distributed unit vectors from a specified distribution using the pmag.py fshdev and dodirot functions. Parameters ---------- k : kappa precision parameter (default is 20) n : number of vectors to determine (default is 100) dec : mean declination of distribution (default is 0) inc : mean inclination of distribution (default is 90) di_block : this function returns a nested list of [dec,inc,1.0] as the default if di_block = False it will return a list of dec and a list of inc Returns --------- di_block : a nested list of [dec,inc,1.0] (default) dec, inc : a list of dec and a list of inc (if di_block = False) Examples -------- >>> ipmag.fishrot(k=20, n=5, dec=40, inc=60) [[44.766285502555775, 37.440866867657235, 1.0], [33.866315796883725, 64.732532250463436, 1.0], [47.002912770597163, 54.317853800896977, 1.0], [36.762165614432547, 56.857240672884252, 1.0], [71.43950604474395, 59.825830945715431, 1.0]]
rotdi=ipmag.fishrot(k=50,n=5,dec=33,inc=41)
for di in rotdi:
    print ('%7.1f %7.1f'%(di[0],di[1]))
43.2 46.6 14.9 27.2 33.3 48.2 3.5 44.4 34.3 54.5
ipmag.plot_net(1)
ipmag.plot_di(di_block=rotdi)
Fisher statistics requires unimodal data (all in one direction with no reversals), but many paleomagnetic data sets are bimodal. To flip bimodal data into a single mode, we can use pmag.flip(). This function calculates the principal direction and flips all the 'reverse' data to the 'normal' direction along the principal axis.
help(pmag.flip)
Help on function flip in module pmagpy.pmag: flip(di_block, combine=False) determines 'normal' direction along the principle eigenvector, then flips the antipodes of the reverse mode to the antipode Parameters ___________ di_block : nested list of directions Return D1 : normal mode D2 : flipped reverse mode as two DI blocks combine : if True return combined D1, D2, nested D,I pairs
#read in the data into an array
vectors=np.loadtxt('data_files/eqarea_ell/tk03.out').transpose()
di_block=vectors[0:2].transpose() # dec in column 0, inc in column 1
# flip the reverse directions to their normal antipodes
normal,flipped=pmag.flip(di_block)
# and plot them up
ipmag.plot_net(1)
ipmag.plot_di(di_block=di_block,color='red')
ipmag.plot_di(di_block=flipped,color='b')
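The flipping logic itself is compact enough to sketch in plain NumPy: find the principal eigenvector of the orientation matrix and negate any Cartesian vector that points away from it. All helper names here are my own, and which mode ends up 'normal' is left to the eigenvector sign, unlike pmag.flip:

```python
import numpy as np

def dir2cart(decs, incs):
    d, i = np.radians(decs), np.radians(incs)
    return np.column_stack([np.cos(d) * np.cos(i),
                            np.sin(d) * np.cos(i),
                            np.sin(i)])

def cart2dir(xyz):
    dec = np.degrees(np.arctan2(xyz[:, 1], xyz[:, 0])) % 360
    inc = np.degrees(np.arcsin(xyz[:, 2]))
    return np.column_stack([dec, inc])

def flip_to_principal(di_block):
    """Flip the antipodal ('reverse') mode of a bimodal direction set
    through the origin so everything lies in a single mode."""
    X = dir2cart(di_block[:, 0], di_block[:, 1])
    evals, evecs = np.linalg.eigh(X.T @ X)   # orientation matrix
    v1 = evecs[:, -1]                        # principal eigenvector
    flipped = np.where((X @ v1)[:, None] < 0, -X, X)
    return cart2dir(flipped)

# two antipodal clusters collapse onto one mode
di = np.array([[10., 45.], [350., 50.], [190., -48.], [170., -44.]])
unified = flip_to_principal(di)
```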
[Essentials Chapter 12] [command line version]
foldtest uses the fold test of Tauxe and Watson (1994, doi: 10.1016/0012-821x(94)90006-x) to find the degree of unfolding that produces the tightest distribution of directions (using the eigenvalue $\tau_1$ as the criterion).
This can be done via ipmag.bootstrap_fold_test(). Note that this can take several minutes.
help(ipmag.bootstrap_fold_test)
Help on function bootstrap_fold_test in module pmagpy.ipmag: bootstrap_fold_test(Data, num_sims=1000, min_untilt=-10, max_untilt=120, bedding_error=0, save=False, save_folder='.', fmt='svg', ninety_nine=False) Conduct a bootstrap fold test (Tauxe and Watson, 1994) Three plots are generated: 1) equal area plot of uncorrected data; 2) tilt-corrected equal area plot; 3) bootstrap results showing the trend of the largest eigenvalues for a selection of the pseudo-samples (red dashed lines), the cumulative distribution of the eigenvalue maximum (green line) and the confidence bounds that enclose 95% of the pseudo-sample maxima. If the confidence bounds enclose 100% unfolding, the data "pass" the fold test. Parameters ---------- Data : a numpy array of directional data [dec, inc, dip_direction, dip] num_sims : number of bootstrap samples (default is 1000) min_untilt : minimum percent untilting applied to the data (default is -10%) max_untilt : maximum percent untilting applied to the data (default is 120%) bedding_error : (circular standard deviation) for uncertainty on bedding poles save : optional save of plots (default is False) save_folder : path to directory where plots should be saved fmt : format of figures to be saved (default is 'svg') ninety_nine : changes confidence bounds from 95 percent to 99 if True Returns ------- three plots : uncorrected data equal area plot, tilt-corrected data equal area plot, bootstrap results and CDF of the eigenvalue maximum Examples -------- Data in separate lists of dec, inc, dip_direction, dip data can be made into the needed array using the ``ipmag.make_diddd_array`` function. >>> dec = [132.5,124.3,142.7,130.3,163.2] >>> inc = [12.1,23.2,34.2,37.7,32.6] >>> dip_direction = [265.0,265.0,265.0,164.0,164.0] >>> dip = [20.0,20.0,20.0,72.0,72.0] >>> data_array = ipmag.make_diddd_array(dec,inc,dip_direction,dip) >>> data_array array([[ 132.5, 12.1, 265. , 20. ], [ 124.3, 23.2, 265. , 20. ], [ 142.7, 34.2, 265. , 20. 
], [ 130.3, 37.7, 164. , 72. ], [ 163.2, 32.6, 164. , 72. ]]) This array can then be passed to the function: >>> ipmag.bootstrap_fold_test(data_array)
data=np.loadtxt('data_files/foldtest/foldtest_example.dat')
ipmag.bootstrap_fold_test(data, num_sims=300)
doing 300 iterations...please be patient..... tightest grouping of vectors obtained at (95% confidence bounds): 84 - 119 percent unfolding range of all bootstrap samples: 75 - 119 percent unfolding
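The quantity the fold test maximizes can be sketched in plain NumPy: untilt each direction by some fraction of its bedding dip and track the largest eigenvalue ($\tau_1$) of the orientation matrix. The helper names and the rotation sign convention below are my own simplifications, not pmag.dotilt:

```python
import numpy as np

def dir2cart(decs, incs):
    d, i = np.radians(decs), np.radians(incs)
    return np.column_stack([np.cos(d) * np.cos(i),
                            np.sin(d) * np.cos(i),
                            np.sin(i)])

def rotate(xyz, axis, angle_deg):
    """Rodrigues rotation of row vectors about a unit axis."""
    a = np.radians(angle_deg)
    k = axis / np.linalg.norm(axis)
    return (xyz * np.cos(a) + np.cross(k, xyz) * np.sin(a)
            + np.outer(xyz @ k, k) * (1 - np.cos(a)))

def tau1(data, percent):
    """Largest normalized eigenvalue of the orientation matrix after
    untilting each direction by `percent` of its bedding dip.
    `data` rows are [dec, inc, dip_direction, dip]."""
    out = []
    for dec, inc, dip_dir, dip in data:
        v = dir2cart([dec], [inc])
        # the strike axis lies 90 degrees from the dip direction
        strike = np.radians(dip_dir - 90)
        axis = np.array([np.cos(strike), np.sin(strike), 0.0])
        out.append(rotate(v, axis, -dip * percent / 100.0)[0])
    X = np.array(out)
    return np.linalg.eigh(X.T @ X / len(X))[0][-1]
```

Scanning percent over, say, -10 to 120 and taking the argmax of tau1(data, percent) is the quantity that ipmag.bootstrap_fold_test() bootstraps to get confidence bounds.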
This is just the MagIC formatted file version of foldtest and can be done using ipmag.bootstrap_fold_test() as above. We just have to read in the MagIC formatted files and make a data matrix of the format expected by ipmag.bootstrap_fold_test(). Here, Pandas is our friend. We will:
sites=pd.read_csv('data_files/foldtest_magic/sites.txt',sep='\t',header=1)
sites.columns
Index(['bed_dip', 'bed_dip_direction', 'citations', 'conglomerate_test', 'contact_test', 'description', 'dir_dec', 'dir_inc', 'dir_k', 'dir_n_samples', 'dir_nrm_origin', 'dir_polarity', 'dir_tilt_correction', 'geologic_classes', 'geologic_types', 'lat', 'lithologies', 'location', 'lon', 'method_codes', 'result_quality', 'result_type', 'site', 'vgp_lat', 'vgp_lon', 'vgp_n_samples'], dtype='object')
The columns we need are: dir_dec, dir_inc, bed_dip_direction, bed_dip The dir_dec and dir_inc have to have a dir_tilt_correction of 0 (geographic coordinates). A little looking through the sites data file shows that the bed_dip_direction are on a separate line (oh database conversion tool maestro, how clever!). So we will have to pair the bedding orientations with the geographic directional info. Thank goodness for Pandas!
# read in data file
sites=pd.read_csv('data_files/foldtest_magic/sites.txt',sep='\t',header=1)
# get the records with bed_dip and bed_dip_direction
sites_bedding=sites.dropna(subset=['bed_dip','bed_dip_direction'])
# drop those columns from the original data frame
sites.drop(['bed_dip','bed_dip_direction'],axis=1,inplace=True)
# just pick out what we want (bedding orientation of the sites)
sites_bedding=sites_bedding[['site','bed_dip','bed_dip_direction']]
# put them back into the original data frame
sites=pd.merge(sites,sites_bedding,how='inner',on='site')
# now we can pick out the desired coordinate system
sites_geo=sites[sites.dir_tilt_correction==0]
# and make our data array
data=sites_geo[['dir_dec','dir_inc','bed_dip_direction','bed_dip']].values
NB: One unfortunate thing about the MagIC data model is that bedding orientation information can be either in the samples.txt or the sites.txt file. This example assumes the data are in the sites.txt file. If not, you can read in the samples.txt file and merge the bedding information with the site directions.
# and off we go!
ipmag.bootstrap_fold_test(data, num_sims=300)
doing 300 iterations...please be patient..... tightest grouping of vectors obtained at (95% confidence bounds): 99 - 119 percent unfolding range of all bootstrap samples: 82 - 119 percent unfolding
from programs.forc_diagram import *
forc = Forc(fileAdres='data_files/forc_diagram/conventional_example.forc',SF=3)
fig = plt.figure(figsize=(6,5), facecolor='white')
fig.subplots_adjust(left=0.18, right=0.97,
bottom=0.18, top=0.9, wspace=0.5, hspace=0.5)
plt.contour(forc.xi*1000,
forc.yi*1000,
forc.zi,9,
colors='k',linewidths=0.5) # T to mT
plt.pcolormesh(forc.xi*1000,
forc.yi*1000,
forc.zi,
cmap=plt.get_cmap('rainbow'))#vmin=np.min(rho)-0.2)
plt.colorbar()
plt.xlabel('B$_{c}$ (mT)',fontsize=12)
plt.ylabel('B$_{i}$ (mT)',fontsize=12)
plt.show()
[Essentials Chapter 11] [command line version]
This program generates sets of data drawn from a normal distribution with a given mean and standard deviation. It is just a wrapper for a call to pmag.gaussdev(), which in turn calls numpy.random.normal(). We could do that ourselves, but it is easiest just to call the pmag version, which we have already imported.
help(pmag.gaussdev)
Help on function gaussdev in module pmagpy.pmag: gaussdev(mean, sigma, N=1) returns a number randomly drawn from a gaussian distribution with the given mean, sigma Parmeters: _____________________________ mean : mean of the gaussian distribution from which to draw deviates sigma : standard deviation of same N : number of deviates desired Returns ------- N deviates from the normal distribution from .
N=1000
bins=100
norm=pmag.gaussdev(10,3,N)
plt.hist(norm,bins=bins,color='black',histtype='step',density=True)
plt.xlabel('Gaussian Deviates')
plt.ylabel('Frequency');
# alternatively:
ipmag.histplot(data=norm, xlab='Gaussian Deviates', save_plots=False, norm=-1)
trying twin
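Since pmag.gaussdev() is a thin wrapper around NumPy's normal sampler, the same deviates can be drawn with NumPy directly; a minimal equivalent:

```python
import numpy as np

# pmag.gaussdev(10, 3, N) is equivalent to:
rng = np.random.default_rng(0)                 # seeded for reproducibility
norm = rng.normal(loc=10, scale=3, size=1000)
print(norm.mean(), norm.std())                 # close to 10 and 3
```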
[Essentials Chapter 12] [command line version]
gobing calculates Bingham statistics for sets of directional data (see documentation for eqarea_ell for nice examples). We do this by calling pmag.dobingham().
help(pmag.dobingham)
Help on function dobingham in module pmagpy.pmag: dobingham(di_block) Calculates the Bingham mean and associated statistical parameters from directions that are input as a di_block Parameters ---------- di_block : a nested list of [dec,inc] or [dec,inc,intensity] Returns ------- bpars : dictionary containing the Bingham mean and associated statistics dictionary keys dec : mean declination inc : mean inclination n : number of datapoints Eta : major ellipse Edec : declination of major ellipse axis Einc : inclination of major ellipse axis Zeta : minor ellipse Zdec : declination of minor ellipse axis Zinc : inclination of minor ellipse axis
di_block=np.loadtxt('data_files/gobing/gobing_example.txt')
pmag.dobingham(di_block)
{'dec': 357.77952733337463, 'inc': 60.3168380083183, 'Edec': 105.71735145158095, 'Einc': 9.956900268236785, 'Zdec': 20.99389065755772, 'Zinc': -27.647853556651516, 'n': 20, 'Zeta': 4.480026907803641, 'Eta': 4.4907543191720025}
[Essentials Chapter 11] [command line version]
gofish calculates Fisher statistics for sets of directional data (see documentation for eqarea_ell for nice examples). This can be done with ipmag.fisher_mean().
help(ipmag.fisher_mean)
Help on function fisher_mean in module pmagpy.ipmag: fisher_mean(dec=None, inc=None, di_block=None) Calculates the Fisher mean and associated parameters from either a list of declination values and a separate list of inclination values or from a di_block (a nested list a nested list of [dec,inc,1.0]). Returns a dictionary with the Fisher mean and statistical parameters. Parameters ---------- dec : list of declinations or longitudes inc : list of inclinations or latitudes di_block : a nested list of [dec,inc,1.0] A di_block can be provided instead of dec, inc lists in which case it will be used. Either dec, inc lists or a di_block need to be provided. Returns ------- fisher_mean : dictionary containing the Fisher mean parameters Examples -------- Use lists of declination and inclination to calculate a Fisher mean: >>> ipmag.fisher_mean(dec=[140,127,142,136],inc=[21,23,19,22]) {'alpha95': 7.292891411309177, 'csd': 6.4097743211340896, 'dec': 136.30838974272072, 'inc': 21.347784026899987, 'k': 159.69251473636305, 'n': 4, 'r': 3.9812138971889026} Use a di_block to calculate a Fisher mean (will give the same output as the example with the lists): >>> ipmag.fisher_mean(di_block=[[140,21],[127,23],[142,19],[136,22]])
di_block=np.loadtxt('data_files/gofish/fishrot.out')
ipmag.fisher_mean(di_block=di_block)
{'dec': 10.783552984917437, 'inc': 39.602582993520244, 'n': 10, 'r': 9.848433230859508, 'k': 59.379770717798884, 'alpha95': 6.320446730051139, 'csd': 10.511525802823254}
There is also a function pmag.dir_df_fisher_mean() that calculates Fisher statistics on a Pandas DataFrame with directional data.
help(pmag.dir_df_fisher_mean)
Help on function dir_df_fisher_mean in module pmagpy.pmag: dir_df_fisher_mean(dir_df) calculates fisher mean for Pandas data frame Parameters __________ dir_df: pandas data frame with columns: dir_dec : declination dir_inc : inclination Returns ------- fpars : dictionary containing the Fisher mean and statistics dec : mean declination inc : mean inclination r : resultant vector length n : number of data points k : Fisher k value csd : Fisher circular standard deviation alpha95 : Fisher circle of 95% confidence
# make the data frame
dir_df=pd.read_csv('data_files/gofish/fishrot.out',delim_whitespace=True, header=None)
dir_df.columns=['dir_dec','dir_inc']
pmag.dir_df_fisher_mean(dir_df)
{'dec': 10.78355298491744, 'inc': 39.60258299352024, 'n': 10, 'r': 9.848433230859508, 'k': 59.379770717798884, 'alpha95': 6.320446730051139, 'csd': 10.511525802823254}
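For reference, the statistics reported above follow directly from Fisher's (1953) formulas; here is a stand-alone NumPy re-derivation (fisher_stats is my own name, not a PmagPy function):

```python
import numpy as np

def fisher_stats(decs, incs):
    """Fisher mean and precision from dec/inc arrays: a minimal
    re-derivation of the quantities ipmag.fisher_mean reports."""
    d, i = np.radians(decs), np.radians(incs)
    xyz = np.column_stack([np.cos(d) * np.cos(i),
                           np.sin(d) * np.cos(i),
                           np.sin(i)])
    n = len(xyz)
    rvec = xyz.sum(axis=0)
    R = np.linalg.norm(rvec)               # resultant vector length
    mx, my, mz = rvec / R
    k = (n - 1) / (n - R)                  # Fisher precision parameter
    a95 = np.degrees(np.arccos(1 - (n - R) / R *
                               ((1 / 0.05) ** (1 / (n - 1)) - 1)))
    return {'dec': np.degrees(np.arctan2(my, mx)) % 360,
            'inc': np.degrees(np.arcsin(mz)),
            'n': n, 'r': R, 'k': k, 'alpha95': a95,
            'csd': 81 / np.sqrt(k)}        # circular standard deviation

fisher_stats([140, 127, 142, 136], [21, 23, 19, 22])
```

Run on the docstring example above (dec=[140,127,142,136], inc=[21,23,19,22]), this reproduces dec ≈ 136.31, inc ≈ 21.35, k ≈ 159.7 and alpha95 ≈ 7.29.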
[Essentials Chapter 12] [command line version]
With gokent we can calculate Kent statistics on sets of directional data (see documentation for eqarea_ell for nice examples).
This calls pmag.dokent() (see also the eqarea_ell example).
help(pmag.dokent)
Help on function dokent in module pmagpy.pmag: dokent(data, NN) gets Kent parameters for data Parameters ___________________ data : nested pairs of [Dec,Inc] NN : normalization NN is the number of data for Kent ellipse NN is 1 for Kent ellipses of bootstrapped mean directions Return kpars dictionary keys dec : mean declination inc : mean inclination n : number of datapoints Eta : major ellipse Edec : declination of major ellipse axis Einc : inclination of major ellipse axis Zeta : minor ellipse Zdec : declination of minor ellipse axis Zinc : inclination of minor ellipse axis
di_block=np.loadtxt('data_files/gokent/gokent_example.txt')
pmag.dokent(di_block,di_block.shape[0])
{'dec': 359.1530456710398, 'inc': 55.03341554254794, 'n': 20, 'Zdec': 246.82080930796928, 'Zinc': 14.881429411175574, 'Edec': 147.69921287231705, 'Einc': 30.819395154843157, 'Zeta': 7.805151237185049, 'Eta': 9.304659303299626}
[Essentials Chapter 12] [command line version]
goprinc calculates the principal directions (and their eigenvalues) for sets of paleomagnetic vectors. It doesn't do any statistics on them, unlike the other programs. We will call pmag.doprinc():
help(pmag.doprinc)
Help on function doprinc in module pmagpy.pmag: doprinc(data) Gets principal components from data in form of a list of [dec,inc] data. Parameters ---------- data : nested list of dec, inc directions Returns ------- ppars : dictionary with the principal components dec : principal directiion declination inc : principal direction inclination V2dec : intermediate eigenvector declination V2inc : intermediate eigenvector inclination V3dec : minor eigenvector declination V3inc : minor eigenvector inclination tau1 : major eigenvalue tau2 : intermediate eigenvalue tau3 : minor eigenvalue N : number of points Edir : elongation direction [dec, inc, length]
di_block=np.loadtxt('data_files/goprinc/goprinc_example.txt')
pmag.doprinc(di_block)
{'Edir': array([151.85261736, 29.07891169, 1. ]), 'dec': 3.869443846664467, 'inc': 56.740159941913355, 'N': 20, 'tau1': 0.8778314142896239, 'tau2': 0.07124540042876253, 'tau3': 0.05092318528161358, 'V2dec': 151.85261735984162, 'V2inc': 29.078911691227447, 'V3dec': 250.25426093396385, 'V3inc': 14.721055437689328}
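The numbers returned by pmag.doprinc() come from an eigen-decomposition of the orientation matrix; a bare-bones NumPy sketch (principal_components is my own name, and the sign convention is a simplification of what doprinc does):

```python
import numpy as np

def principal_components(di_block):
    """Principal direction and normalized eigenvalues (tau) of a
    set of [dec, inc] directions."""
    d, i = np.radians(di_block[:, 0]), np.radians(di_block[:, 1])
    X = np.column_stack([np.cos(d) * np.cos(i),
                         np.sin(d) * np.cos(i),
                         np.sin(i)])
    T = X.T @ X / len(X)               # normalized orientation matrix
    evals, evecs = np.linalg.eigh(T)   # eigenvalues in ascending order
    v1 = evecs[:, -1]
    if v1 @ X.sum(axis=0) < 0:         # point v1 with the data resultant
        v1 = -v1
    return {'dec': np.degrees(np.arctan2(v1[1], v1[0])) % 360,
            'inc': np.degrees(np.arcsin(v1[2])),
            'tau1': evals[2], 'tau2': evals[1], 'tau3': evals[0]}
```

For a tight cluster of directions, tau1 approaches 1 and the principal direction approaches the cluster center.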
[Essentials Chapter 5]
[Essentials Chapter 7]
[Essentials Appendix C]
[MagIC Database]
[command line version]
This program plots MagIC formatted measurement data as hysteresis loops, $\Delta$M, d$\Delta$M and backfield curves, depending on what data are available. There is an ipmag.hysteresis_magic function that does this for us.
help(ipmag.hysteresis_magic)
Help on function hysteresis_magic in module pmagpy.ipmag: hysteresis_magic(output_dir_path='.', input_dir_path='', spec_file='specimens.txt', meas_file='measurements.txt', fmt='svg', save_plots=True, make_plots=True, pltspec='', n_specs=5, interactive=False) Calculate hysteresis parameters and plot hysteresis data. Plotting may be called interactively with save_plots==False, or be suppressed entirely with make_plots==False. Parameters ---------- output_dir_path : str, default "." Note: if using Windows, all figures will be saved to working directly *not* dir_path input_dir_path : str path for intput file if different from output_dir_path (default is same) spec_file : str, default "specimens.txt" output file to save hysteresis data meas_file : str, default "measurements.txt" input measurement file fmt : str, default "svg" format for figures, [svg, jpg, pdf, png] save_plots : bool, default True if True, generate and save all requested plots make_plots : bool, default True if False, skip making plots and just save hysteresis data (if False, save_plots will be set to False also) pltspec : str, default "" specimen name to plot, otherwise will plot all specimens n_specs : int number of specimens to plot, default 5 if you want to make all possible plots, specify "all" interactive : bool, default False interactively plot and display for each specimen (this is best used on the command line or in the Python interpreter) Returns --------- Tuple : (True or False indicating if conversion was sucessful, output file names written)
So let's try this out with some data from Ben-Yosef et al. (2008; doi: 10.1029/2007JB005235). The default is to plot the first 5 specimens and that is enough for us. We also do not need to save plots at this point.
ipmag.hysteresis_magic(output_dir_path='data_files/hysteresis_magic/',save_plots=False)
Plots may be on top of each other - use mouse to place IS06a-1 1 out of 5 plotting IRM IS06a-2 2 out of 5 plotting IRM IS06a-3 3 out of 5 plotting IRM IS06a-4 4 out of 5 plotting IRM IS06a-5 5 out of 5 plotting IRM -I- overwriting /Users/nebula/Python/PmagPy/data_files/hysteresis_magic/specimens.txt -I- 14 records written to specimens file hysteresis parameters saved in /Users/nebula/Python/PmagPy/data_files/hysteresis_magic/specimens.txt
(True, ['/Users/nebula/Python/PmagPy/data_files/hysteresis_magic/specimens.txt'])
[Essentials Chapter 2] [command line version]
This program gives geomagnetic field vector data for a specified place at a specified time. It has many built-in models including IGRFs, GUFM and several archeomagnetic models. It calls the function ipmag.igrf() for this, so that is what we will do.
help(ipmag.igrf)
Help on function igrf in module pmagpy.ipmag: igrf(input_list, mod='', ghfile='') Determine Declination, Inclination and Intensity from the IGRF model. (http://www.ngdc.noaa.gov/IAGA/vmod/igrf.html) Parameters ---------- input_list : list with format [Date, Altitude, Latitude, Longitude] date must be in decimal year format XXXX.XXXX (Common Era) mod : desired model "" : Use the IGRF custom : use values supplied in ghfile or choose from this list ['arch3k','cals3k','pfm9k','hfm10k','cals10k.2','cals10k.1b'] where: arch3k (Korte et al., 2009) cals3k (Korte and Constable, 2011) cals10k.1b (Korte et al., 2011) pfm9k (Nilsson et al., 2014) hfm10k is the hfm.OL1.A1 of Constable et al. (2016) cals10k.2 (Constable et al., 2016) the first four of these models, are constrained to agree with gufm1 (Jackson et al., 2000) for the past four centuries gh : path to file with l m g h data Returns ------- igrf_array : array of IGRF values (0: dec; 1: inc; 2: intensity (in nT)) Examples -------- >>> local_field = ipmag.igrf([2013.6544, .052, 37.87, -122.27]) >>> local_field array([ 1.39489916e+01, 6.13532008e+01, 4.87452644e+04]) >>> ipmag.igrf_print(local_field) Declination: 13.949 Inclination: 61.353 Intensity: 48745.264 nT
We will calculate the field for San Diego from 3000 BCE to 1950 in 50 year increments using the hfm.OL1.A1 model of Constable et al. (2016, doi: 10.1016/j.epsl.2016.08.015).
# make a list of desired dates
dates=range(-3000,1950,50) # list of dates in +/- Common Era
mod = 'hfm10k' # choose the desired model
lat,lon,alt=33,-117,0 # desired latitude, longitude and altitude
Vecs=[] # list for Dec,Inc,Int outputs
for date in dates: # step through the dates
    Vecs.append(ipmag.igrf([date,alt,lat,lon],mod=mod)) # append to list
vector_df = pd.DataFrame(Vecs) # make it into a Pandas dataframe
vector_df.columns=['dec','inc','int']
vector_df['vadms']=pmag.b_vdm(vector_df.int.values*1e-9, lat) # calculate the VADMs
vector_df['dec_adj']=vector_df['dec']
vector_df.loc[vector_df.dec>180,['dec_adj']]=vector_df.dec-360 # adjust declinations to be -180 => 180
fig=plt.figure(1,figsize=(7,9)) # set up the figure
fig.add_subplot(411) # make 4 rows of plots, this is the first
plt.plot(dates,vector_df.dec_adj) # plot the adjusted declinations
plt.ylabel('Declination ($^{\circ}$)')
plt.title('Geomagnetic field evaluated at Lat: '+str(lat)+' / Lon: '+str(lon))
fig.add_subplot(412) # this is the second
plt.plot(dates,vector_df.inc) # plot the inclinations
plt.ylabel('Inclination ($^{\circ}$)')
fig.add_subplot(413)
plt.plot(dates,vector_df.int*1e-3) # plot the intensites (in uT instead of nT)
plt.ylabel('Intensity ($\mu$T)')
fig.add_subplot(414) # plot the VADMs
plt.plot(dates,vector_df.vadms*1e-21) # plot as ZAm^2
plt.ylabel('VADM (ZAm$^2$)')
plt.xlabel('Dates (CE)');
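pmag.b_vdm() applies the standard dipole relation $m = \frac{4\pi r^3}{\mu_0}\frac{B}{\sqrt{1+3\cos^2\theta}}$ to convert field strength to a virtual (axial) dipole moment. Here is a minimal numpy sketch of that relation (not the library code itself; b_vdm_sketch is a made-up name):

```python
import numpy as np

def b_vdm_sketch(B, lat):
    """Virtual (axial) dipole moment in Am^2 from field strength B (in tesla)
    at latitude lat (degrees) -- the standard dipole relation."""
    r_earth = 6.371e6                      # Earth radius, m
    mu0 = 4 * np.pi * 1e-7                 # permeability of free space
    colat = np.radians(90.0 - lat)
    return 4 * np.pi * r_earth ** 3 / mu0 * B / np.sqrt(1 + 3 * np.cos(colat) ** 2)

m = b_vdm_sketch(6e-5, 90)  # a 60 uT polar field, roughly 7.8e22 Am^2
```

Note that a given field strength at the equator implies exactly twice the moment implied by the same field at the pole, which is why the latitude term matters.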
[Essentials Chapter 11] [command line version]
You can't get a meaningful average inclination from inclination-only data because of the exponential relationship between inclinations and the true mean inclination for Fisher distributions (except exactly at the pole and the equator). So McFadden and Reid (1982, doi: 10.1111/j.1365-246X.1982.tb04950.x) developed a maximum likelihood estimate of the true mean inclination in the absence of declination data. incfish.py is an implementation of that approach; it calls pmag.doincfish(), so that is what we will do here.
help(pmag.doincfish)
Help on function doincfish in module pmagpy.pmag:

doincfish(inc)
    gets fisher mean inc from inc only data
    input: list of inclination values
    output: dictionary of
        'n' : number of inclination values supplied
        'ginc' : gaussian mean of inclinations
        'inc' : estimated Fisher mean
        'r' : estimated Fisher R value
        'k' : estimated Fisher kappa
        'alpha95' : estimated fisher alpha_95
        'csd' : estimated circular standard deviation
incs=np.loadtxt('data_files/incfish/incfish_example_inc.dat')
pmag.doincfish(incs)
{'n': 100, 'ginc': 57.135000000000005, 'inc': 61.024999999999764, 'r': 92.8908144677846, 'k': 13.925645849497057, 'alpha95': 0.9966295962964244, 'csd': 21.70587740469687}
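The bias that motivates this estimator is easy to see in a quick simulation: draw Fisher-distributed directions around a known mean and compare the plain (gaussian) average of the inclinations with the true value. This is a hedged numpy-only sketch (standard inverse-CDF sampling for the Fisher distribution; the parameter values are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(42)
kappa, true_inc, n = 20, 60.0, 10000

# draw off-axis angles from a Fisher distribution (inverse-CDF trick)
u = rng.random(n)
cos_theta = 1 + np.log(u + (1 - u) * np.exp(-2 * kappa)) / kappa
theta, phi = np.arccos(cos_theta), rng.random(n) * 2 * np.pi

# unit vectors clustered around +z, then tilt the mean down to inclination 60
x, z = np.sin(theta) * np.cos(phi), cos_theta
alpha = np.radians(90 - true_inc)
zr = -x * np.sin(alpha) + z * np.cos(alpha)

incs = np.degrees(np.arcsin(zr))   # inclination of each simulated direction
print(incs.mean())                 # the plain average comes out shallower than 60
```

The simulated gaussian mean lands a couple of degrees shallow of the true 60 degrees, just like the 'ginc' versus 'inc' difference in the doincfish output above.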
import matplotlib
from programs.irm_unmix import dataFit, fit_plots
fitResult = dataFit(filePath='data_files/irm_unmix/irm_unmix_example.dat',fitNumber=3)
xfit=fitResult.fitDf['field']
xraw=fitResult.rawDf['field_log']
yfit=fitResult.pdf_best
yraw=fitResult.rawDf['rem_grad_norm']
fig = plt.figure(1, figsize=(5, 5))
ax = fig.add_subplot(111)
fit_plots(ax,xfit,xraw,yfit,yraw)
[Essentials Chapter 8] [command line version]
Someone (Saiko Sugisaki) measured a number of samples from IODP Expedition 318 Hole U1359A for IRM acquisition curves. These were converted to the MagIC measurements format and saved in ../irmaq_magic/measurements.txt.
This program reads in a MagIC data model 3 file with IRM acquisition data and plots it by calling pmagplotlib.plot_mag() with options to plot the entire data file, or by site, sample or individual specimen. We can do that too! All we need to know is the method_code for IRM acquisition (which I do), and to propagate specimen => sample => site identities if any plotting option besides "entire file" or "by specimen" is desired.
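Propagating the specimen => sample => site identities is just a pair of pandas merges on the shared key columns; a toy sketch with made-up table contents:

```python
import pandas as pd

# toy stand-ins for the measurements, specimens and samples tables
meas = pd.DataFrame({'specimen': ['sv01a1', 'sv01b1'], 'magn_moment': [1e-5, 2e-5]})
specs = pd.DataFrame({'specimen': ['sv01a1', 'sv01b1'], 'sample': ['sv01a', 'sv01b']})
samps = pd.DataFrame({'sample': ['sv01a', 'sv01b'], 'site': ['sv01', 'sv01']})

# two merges walk the identities down: specimen => sample => site
meas = meas.merge(specs, on='specimen').merge(samps, on='sample')
print(meas[['specimen', 'sample', 'site']])
```

We use exactly this merge pattern later in the plot_lnp example to attach site names to specimen directions.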
plt.clf()
help(pmagplotlib.plot_mag)
Help on function plot_mag in module pmagpy.pmagplotlib:

plot_mag(fignum, datablock, s, num, units, norm)
    plots magnetization against (de)magnetizing temperature or field

    Parameters
    _________________
    fignum : matplotlib figure number for plotting
    datablock : nested list of [step, 0, 0, magnetization, 1, quality]
    s : string for title
    num : no idea - set it to zero
    units : [T,K,U] for tesla, kelvin or arbitrary
    norm : [True,False] if True, normalize

    Effects
    ______
    plots figure
# make the figure
plt.figure(1,(5,5))
#read in the data
data=pd.read_csv('data_files/irmaq_magic/measurements.txt',sep='\t',header=1)
# fish out the IRM data
data=data[data.method_codes.str.contains('LP-IRM')] #
data['zero']=0 # make a dummy field initialized with zero
data['one']=1 # make a dummy field initialized with one
# make the required list
# possible intensity fields are:
#['magn_moment', 'magn_volume', 'magn_mass', 'magnitude']
# this data file has magn_moment data
# pmagplotlib.plot_mag plots data by specimen, so get list of specimens
specimens=data.specimen.unique()
for specimen in specimens: # step through one by one
spec_df=data[data.specimen==specimen] # get data for this specimen
# make the data block required
datablock=np.array(spec_df[['treat_dc_field','zero','zero','magn_moment','one','quality']]).tolist()
pmagplotlib.plot_mag(1,datablock,'Example',0,'T',1)
[Essentials Chapter 11]
[Essentials Appendix C]
[MagIC Database]
[command line version]
This program makes equal area projections site by site along with the Fisher confidence ellipses using the McFadden and McElhinny (1988, doi: 10.1016/0012-821X(88)90072-6) method for combining lines and planes. Options are to plot in specimen, geographic or tilt corrected coordinate systems (although the specimen coordinate system is a bit silly if the specimens were not mutually oriented, and the geographic and tilt corrected results would presumably be identical except for a coherent rotation of the site). It also builds in filters for MAD or $\alpha_{95}$ cutoffs at the specimen level.
After filtering, the site level data are processed by pmag.dolnp() which calculates the MM88 statistics. These, along with the data are then plotted by pmagplotlib.plot_lnp().
We can do all that from within the notebook, using the wonders of Pandas.
help(pmagplotlib.plot_lnp)
Help on function plot_lnp in module pmagpy.pmagplotlib:

plot_lnp(fignum, s, datablock, fpars, direction_type_key)
    plots lines and planes on a great circle with alpha 95 and mean

    Parameters
    _________
    fignum : number of plt.figure() object
    datablock : nested list of dictionaries with keys in 3.0 or 2.5 format
        3.0 keys: dir_dec, dir_inc, dir_tilt_correction = [-1,0,100], direction_type_key = ['p','l']
        2.5 keys: dec, inc, tilt_correction = [-1,0,100], direction_type_key = ['p','l']
    fpars : Fisher parameters calculated by, e.g., pmag.dolnp() or pmag.dolnp3_0()
    direction_type_key : key for dictionary direction_type ('specimen_direction_type')

    Effects
    _______
    plots the site level figure
# read in specimen data
spec_df=pd.read_csv('data_files/lnp_magic/specimens.txt',sep='\t',header=1)
# filter for quality = 'g'
if 'quality' in spec_df.columns:
spec_df=spec_df[spec_df.quality=='g']
spec_df.head()
specimen | sample | experiments | dir_dec | dir_inc | dir_n_measurements | dir_tilt_correction | dir_mad_free | geologic_classes | geologic_types | lithologies | meas_step_max | meas_step_min | meas_step_unit | description | int_corr | citations | method_codes | specimen_direction_type | |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | sv01a1 | sv01a | NaN | NaN | NaN | NaN | NaN | NaN | Igneous | Lava Flow | Basalt | NaN | NaN | NaN | NaN | NaN | This study | NaN | NaN |
1 | sv01a1 | sv01a | sv01a1 : LP-DIR-AF | 348.4 | -34.7 | 12.0 | 0.0 | 5.1 | NaN | NaN | NaN | 0.14 | 0.005 | T | Recalculated from original measurements; super... | u | This study | LP-DIR-AF : SO-CMD-NORTH : DE-BFP : DA-DIR-GEO | p |
2 | sv01b1 | sv01b | NaN | NaN | NaN | NaN | NaN | NaN | Igneous | Lava Flow | Basalt | NaN | NaN | NaN | NaN | NaN | This study | NaN | NaN |
3 | sv01b1 | sv01b | sv01b1 : LP-DIR-AF | 122.7 | 25.5 | 15.0 | 0.0 | 3.8 | NaN | NaN | NaN | 0.18 | 0.000 | T | Recalculated from original measurements; super... | u | This study | LP-DIR-AF : SO-CMD-NORTH : DE-BFP : DA-DIR-GEO | p |
4 | sv01g1 | sv01g | NaN | NaN | NaN | NaN | NaN | NaN | Igneous | Lava Flow | Basalt | NaN | NaN | NaN | NaN | NaN | This study | NaN | NaN |
Of course, this being a data file converted from data model 2.5, there are several lines per specimen. We want the non-blank dir_dec info with the desired (0) tilt correction.
spec_df=spec_df.dropna(subset=['dir_dec','dir_inc','dir_tilt_correction'])
spec_df=spec_df[spec_df.dir_tilt_correction==0]
spec_df.head()
specimen | sample | experiments | dir_dec | dir_inc | dir_n_measurements | dir_tilt_correction | dir_mad_free | geologic_classes | geologic_types | lithologies | meas_step_max | meas_step_min | meas_step_unit | description | int_corr | citations | method_codes | specimen_direction_type | |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1 | sv01a1 | sv01a | sv01a1 : LP-DIR-AF | 348.4 | -34.7 | 12.0 | 0.0 | 5.1 | NaN | NaN | NaN | 0.14 | 0.005 | T | Recalculated from original measurements; super... | u | This study | LP-DIR-AF : SO-CMD-NORTH : DE-BFP : DA-DIR-GEO | p |
3 | sv01b1 | sv01b | sv01b1 : LP-DIR-AF | 122.7 | 25.5 | 15.0 | 0.0 | 3.8 | NaN | NaN | NaN | 0.18 | 0.000 | T | Recalculated from original measurements; super... | u | This study | LP-DIR-AF : SO-CMD-NORTH : DE-BFP : DA-DIR-GEO | p |
5 | sv01g1 | sv01g | sv01g1 : LP-DIR-AF | 162.4 | 36.6 | 13.0 | 0.0 | 3.0 | NaN | NaN | NaN | 0.18 | 0.010 | T | Recalculated from original measurements; super... | u | This study | LP-DIR-AF : SO-CMD-NORTH : DE-BFP : DA-DIR-GEO | p |
7 | sv01h1 | sv01h | sv01h1 : LP-DIR-AF | 190.4 | 34.6 | 13.0 | 0.0 | 5.0 | NaN | NaN | NaN | 0.18 | 0.010 | T | Recalculated from original measurements; super... | u | This study | LP-DIR-AF : SO-CMD-NORTH : DE-BFP : DA-DIR-GEO | p |
9 | sv01j1 | sv01j | sv01j1 : LP-DIR-AF | 133.4 | 21.9 | 8.0 | 0.0 | 5.4 | NaN | NaN | NaN | 0.18 | 0.060 | T | Recalculated from original measurements; super... | u | This study | LP-DIR-AF : SO-CMD-NORTH : DE-BFP : DA-DIR-GEO | p |
Let's proceed this way:
# read in samples table in order to pair site name to specimen data
samp_df=pd.read_csv('data_files/lnp_magic/samples.txt',sep='\t',header=1)
samp_df.head()
sample | site | specimens | dir_dec | dir_inc | dir_n_specimens | dir_n_specimens_lines | dir_n_specimens_planes | dir_tilt_correction | lat | ... | description | azimuth | dip | citations | method_codes | location | sample_direction_type | age | sample_inferred_age_sigma | sample_inferred_age_unit | |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | sv01a | sv01 | NaN | NaN | NaN | NaN | NaN | NaN | NaN | 35.3432 | ... | NaN | -19.0 | -45.0 | This study | SO-MAG : FS-FD : SO-POM : SO-CMD-NORTH | San Francisco Volcanics | NaN | NaN | NaN | NaN |
1 | sv01a | sv01 | sv01a1 | 348.4 | -34.7 | 1.0 | 0.0 | 1.0 | 0.0 | NaN | ... | sample direction. | NaN | NaN | This study | LP-DIR-AF : SO-CMD-NORTH : DE-BFP : DA-DIR-GEO | San Francisco Volcanics | p | 2.5 | 2.5 | Ma |
2 | sv01b | sv01 | NaN | NaN | NaN | NaN | NaN | NaN | NaN | 35.3432 | ... | NaN | -51.0 | -23.0 | This study | SO-MAG : FS-FD : SO-POM : SO-CMD-NORTH | San Francisco Volcanics | NaN | NaN | NaN | NaN |
3 | sv01b | sv01 | sv01b1 | 122.7 | 25.5 | 1.0 | 0.0 | 1.0 | 0.0 | NaN | ... | sample direction. | NaN | NaN | This study | LP-DIR-AF : SO-CMD-NORTH : DE-BFP : DA-DIR-GEO | San Francisco Volcanics | p | 2.5 | 2.5 | Ma |
4 | sv01c | sv01 | NaN | NaN | NaN | NaN | NaN | NaN | NaN | 35.3432 | ... | NaN | 284.0 | -51.0 | This study | SO-MAG : FS-FD : SO-POM | San Francisco Volcanics | NaN | NaN | NaN | NaN |
5 rows × 25 columns
Of course there are duplicate sample records, so let's drop the rows with blank 'specimens' entries, then make a data frame with just the 'sample' and 'site' columns. Then we can merge it with the spec_df dataframe.
samp_df=samp_df.dropna(subset=['specimens'])
samp_df=samp_df[['sample','site']]
spec_df=pd.merge(spec_df,samp_df,on='sample')
spec_df
specimen | sample | experiments | dir_dec | dir_inc | dir_n_measurements | dir_tilt_correction | dir_mad_free | geologic_classes | geologic_types | lithologies | meas_step_max | meas_step_min | meas_step_unit | description | int_corr | citations | method_codes | specimen_direction_type | site | |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | sv01a1 | sv01a | sv01a1 : LP-DIR-AF | 348.4 | -34.7 | 12.0 | 0.0 | 5.1 | NaN | NaN | NaN | 0.14 | 0.005 | T | Recalculated from original measurements; super... | u | This study | LP-DIR-AF : SO-CMD-NORTH : DE-BFP : DA-DIR-GEO | p | sv01 |
1 | sv01b1 | sv01b | sv01b1 : LP-DIR-AF | 122.7 | 25.5 | 15.0 | 0.0 | 3.8 | NaN | NaN | NaN | 0.18 | 0.000 | T | Recalculated from original measurements; super... | u | This study | LP-DIR-AF : SO-CMD-NORTH : DE-BFP : DA-DIR-GEO | p | sv01 |
2 | sv01g1 | sv01g | sv01g1 : LP-DIR-AF | 162.4 | 36.6 | 13.0 | 0.0 | 3.0 | NaN | NaN | NaN | 0.18 | 0.010 | T | Recalculated from original measurements; super... | u | This study | LP-DIR-AF : SO-CMD-NORTH : DE-BFP : DA-DIR-GEO | p | sv01 |
3 | sv01h1 | sv01h | sv01h1 : LP-DIR-AF | 190.4 | 34.6 | 13.0 | 0.0 | 5.0 | NaN | NaN | NaN | 0.18 | 0.010 | T | Recalculated from original measurements; super... | u | This study | LP-DIR-AF : SO-CMD-NORTH : DE-BFP : DA-DIR-GEO | p | sv01 |
4 | sv01j1 | sv01j | sv01j1 : LP-DIR-AF | 133.4 | 21.9 | 8.0 | 0.0 | 5.4 | NaN | NaN | NaN | 0.18 | 0.060 | T | Recalculated from original measurements; super... | u | This study | LP-DIR-AF : SO-CMD-NORTH : DE-BFP : DA-DIR-GEO | p | sv01 |
5 | sv02a1 | sv02a | sv02a1 : LP-DIR-AF | 330.3 | 30.4 | 6.0 | 0.0 | 2.5 | NaN | NaN | NaN | 0.18 | 0.080 | T | Recalculated from original measurements; super... | u | This study | LP-DIR-AF : SO-SUN : DE-BFL : DA-DIR-GEO | l | sv02 |
6 | sv02b1 | sv02b | sv02b1 : LP-DIR-AF | 225.7 | 37.0 | 14.0 | 0.0 | 3.7 | NaN | NaN | NaN | 0.18 | 0.000 | T | Recalculated from original measurements; super... | u | This study | LP-DIR-AF : SO-SUN : DE-BFP : DA-DIR-GEO | p | sv02 |
7 | sv02c1 | sv02c | sv02c1 : LP-DIR-AF | 103.1 | 32.3 | 11.0 | 0.0 | 3.8 | NaN | NaN | NaN | 0.18 | 0.030 | T | Recalculated from original measurements; super... | u | This study | LP-DIR-AF : SO-SUN : DE-BFP : DA-DIR-GEO | p | sv02 |
8 | sv02d1 | sv02d | sv02d1 : LP-DIR-AF | 338.6 | 44.5 | 11.0 | 0.0 | 1.3 | NaN | NaN | NaN | 0.18 | 0.030 | T | Recalculated from original measurements; super... | u | This study | LP-DIR-AF : SO-CMD-NORTH : DE-BFL : DA-DIR-GEO | l | sv02 |
9 | sv02e1 | sv02e | sv02e1 : LP-DIR-AF | 340.6 | 42.4 | 10.0 | 0.0 | 0.9 | NaN | NaN | NaN | 0.18 | 0.040 | T | Recalculated from original measurements; super... | u | This study | LP-DIR-AF : SO-CMD-NORTH : DE-BFL : DA-DIR-GEO | l | sv02 |
10 | sv02k1 | sv02k | sv02k1 : LP-DIR-AF | 337.7 | 40.3 | 12.0 | 0.0 | 2.6 | NaN | NaN | NaN | 0.18 | 0.020 | T | Recalculated from original measurements; super... | u | This study | LP-DIR-AF : SO-SUN : DE-BFL : DA-DIR-GEO | l | sv02 |
11 | sv03a1 | sv03a | sv03a1 : LP-DIR-AF | 344.6 | 52.1 | 4.0 | 0.0 | 1.3 | NaN | NaN | NaN | 0.15 | 0.080 | T | Recalculated from original measurements; super... | u | This study | LP-DIR-AF : SO-CMD-NORTH : DE-BFL : DA-DIR-GEO | l | sv03 |
12 | sv03b1 | sv03b | sv03b1 : LP-DIR-AF | 39.5 | -24.5 | 6.0 | 0.0 | 3.8 | NaN | NaN | NaN | 0.18 | 0.080 | T | Recalculated from original measurements; super... | u | This study | LP-DIR-AF : SO-CMD-NORTH : DE-BFP : DA-DIR-GEO | p | sv03 |
13 | sv03d1 | sv03d | sv03d1 : LP-DIR-AF | 3.9 | -33.5 | 11.0 | 0.0 | 4.6 | NaN | NaN | NaN | 0.18 | 0.000 | T | Recalculated from original measurements; super... | u | This study | LP-DIR-AF : SO-CMD-NORTH : DE-BFP : DA-DIR-GEO | p | sv03 |
14 | sv03e1 | sv03e | sv03e1 : LP-DIR-AF | 8.8 | -33.8 | 13.0 | 0.0 | 2.1 | NaN | NaN | NaN | 0.18 | 0.010 | T | Recalculated from original measurements; super... | u | This study | LP-DIR-AF : SO-CMD-NORTH : DE-BFP : DA-DIR-GEO | p | sv03 |
15 | sv03g1 | sv03g | sv03g1 : LP-DIR-AF | 352.8 | 52.2 | 7.0 | 0.0 | 0.9 | NaN | NaN | NaN | 0.18 | 0.070 | T | Recalculated from original measurements; super... | u | This study | LP-DIR-AF : SO-CMD-NORTH : DE-BFL : DA-DIR-GEO | l | sv03 |
16 | sv03h1 | sv03h | sv03h1 : LP-DIR-AF | 349.1 | 55.5 | 8.0 | 0.0 | 0.9 | NaN | NaN | NaN | 0.18 | 0.060 | T | Recalculated from original measurements; super... | u | This study | LP-DIR-AF : SO-SUN : DE-BFL : DA-DIR-GEO | l | sv03 |
17 | sv03i1 | sv03i | sv03i1 : LP-DIR-AF | 353.2 | 57.3 | 8.0 | 0.0 | 0.7 | NaN | NaN | NaN | 0.18 | 0.060 | T | Recalculated from original measurements; super... | u | This study | LP-DIR-AF : SO-SUN : DE-BFL : DA-DIR-GEO | l | sv03 |
18 | sv03j1 | sv03j | sv03j1 : LP-DIR-AF | 324.1 | 44.6 | 8.0 | 0.0 | 1.9 | NaN | NaN | NaN | 0.18 | 0.060 | T | Recalculated from original measurements; super... | u | This study | LP-DIR-AF : SO-CMD-NORTH : DE-BFL : DA-DIR-GEO | l | sv03 |
19 | sv04a1 | sv04a | sv04a1 : LP-DIR-AF | 348.3 | 50.2 | 12.0 | 0.0 | 1.5 | NaN | NaN | NaN | 0.18 | 0.020 | T | Recalculated from original measurements; super... | u | This study | LP-DIR-AF : SO-CMD-NORTH : DE-BFL : DA-DIR-GEO | l | sv04 |
20 | sv04b1 | sv04b | sv04b1 : LP-DIR-AF | 353.0 | 46.6 | 12.0 | 0.0 | 1.3 | NaN | NaN | NaN | 0.18 | 0.020 | T | Recalculated from original measurements; super... | u | This study | LP-DIR-AF : SO-CMD-NORTH : DE-BFL : DA-DIR-GEO | l | sv04 |
21 | sv04d1 | sv04d | sv04d1 : LP-DIR-AF | 342.6 | 48.7 | 9.0 | 0.0 | 2.6 | NaN | NaN | NaN | 0.18 | 0.050 | T | Recalculated from original measurements; super... | u | This study | LP-DIR-AF : SO-CMD-NORTH : DE-BFL : DA-DIR-GEO | l | sv04 |
22 | sv04e1 | sv04e | sv04e1 : LP-DIR-AF | 342.2 | 56.9 | 11.0 | 0.0 | 1.2 | NaN | NaN | NaN | 0.18 | 0.030 | T | Recalculated from original measurements; super... | u | This study | LP-DIR-AF : SO-CMD-NORTH : DE-BFL : DA-DIR-GEO | l | sv04 |
23 | sv04i1 | sv04i | sv04i1 : LP-DIR-AF | 347.4 | 50.6 | 5.0 | 0.0 | 2.4 | NaN | NaN | NaN | 0.15 | 0.060 | T | Recalculated from original measurements; super... | u | This study | LP-DIR-AF : SO-CMD-NORTH : DE-BFL : DA-DIR-GEO | l | sv04 |
24 | sv05a2 | sv05a | sv05a2 : LP-DIR-AF | 202.3 | 31.5 | 14.0 | 0.0 | 4.9 | NaN | NaN | NaN | 423.00 | 0.030 | K | Recalculated from original measurements; super... | u | This study | LP-DIR-AF : LP-DIR-T : SO-SUN : DE-BFP : DA-DI... | p | sv05 |
25 | sv05b1 | sv05b | sv05b1 : LP-DIR-AF | 160.0 | -47.8 | 6.0 | 0.0 | 1.7 | NaN | NaN | NaN | 0.18 | 0.080 | T | Recalculated from original measurements; super... | u | This study | LP-DIR-AF : SO-SUN : DE-BFL : DA-DIR-GEO | l | sv05 |
26 | sv05c1 | sv05c | sv05c1 : LP-DIR-AF | 168.4 | 37.3 | 6.0 | 0.0 | 3.2 | NaN | NaN | NaN | 0.18 | 0.080 | T | Recalculated from original measurements; super... | u | This study | LP-DIR-AF : SO-SUN : DE-BFP : DA-DIR-GEO | p | sv05 |
27 | sv05d2 | sv05d | sv05d2 : LP-DIR-AF | 161.2 | -47.4 | 5.0 | 0.0 | 3.3 | NaN | NaN | NaN | 0.18 | 0.080 | T | Recalculated from original measurements; super... | u | This study | LP-DIR-AF : SO-SUN : DE-BFL : DA-DIR-GEO | l | sv05 |
28 | sv05f1 | sv05f | sv05f1 : LP-DIR-AF | 72.5 | -2.6 | 13.0 | 0.0 | 4.4 | NaN | NaN | NaN | 0.18 | 0.010 | T | Recalculated from original measurements; super... | u | This study | LP-DIR-AF : SO-SUN : DE-BFP : DA-DIR-GEO | p | sv05 |
29 | sv05h1 | sv05h | sv05h1 : LP-DIR-AF | 172.0 | -44.3 | 7.0 | 0.0 | 4.3 | NaN | NaN | NaN | 0.15 | 0.030 | T | Recalculated from original measurements; super... | u | This study | LP-DIR-AF : SO-SUN : DE-BFL : DA-DIR-GEO | l | sv05 |
... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... |
159 | sv58g1 | sv58g | sv58g1 : LP-DIR-AF | 182.5 | -60.0 | 11.0 | 0.0 | 0.5 | NaN | NaN | NaN | 0.18 | 0.030 | T | Recalculated from original measurements; super... | u | This study | LP-DIR-AF : SO-SUN : DE-BFL : DA-DIR-GEO | l | sv58 |
160 | sv58k1 | sv58k | sv58k1 : LP-DIR-AF | 188.0 | -65.2 | 12.0 | 0.0 | 0.7 | NaN | NaN | NaN | 0.18 | 0.020 | T | Recalculated from original measurements; super... | u | This study | LP-DIR-AF : SO-SUN : DE-BFL : DA-DIR-GEO | l | sv58 |
161 | sv59a1 | sv59a | sv59a1 : LP-DIR-AF | 181.0 | -51.7 | 11.0 | 0.0 | 1.0 | NaN | NaN | NaN | 0.18 | 0.030 | T | Recalculated from original measurements; super... | u | This study | LP-DIR-AF : SO-SUN : DE-BFL : DA-DIR-GEO | l | sv59 |
162 | sv59f1 | sv59f | sv59f1 : LP-DIR-AF | 179.6 | -53.1 | 11.0 | 0.0 | 0.8 | NaN | NaN | NaN | 0.18 | 0.030 | T | Recalculated from original measurements; super... | u | This study | LP-DIR-AF : SO-SUN : DE-BFL : DA-DIR-GEO | l | sv59 |
163 | sv59g1 | sv59g | sv59g1 : LP-DIR-AF | 178.4 | -52.0 | 11.0 | 0.0 | 1.0 | NaN | NaN | NaN | 0.18 | 0.030 | T | Recalculated from original measurements; super... | u | This study | LP-DIR-AF : SO-SUN : DE-BFL : DA-DIR-GEO | l | sv59 |
164 | sv59j1 | sv59j | sv59j1 : LP-DIR-AF | 176.4 | -48.3 | 12.0 | 0.0 | 1.4 | NaN | NaN | NaN | 0.18 | 0.020 | T | Recalculated from original measurements; super... | u | This study | LP-DIR-AF : SO-SUN : DE-BFL : DA-DIR-GEO | l | sv59 |
165 | sv60a1 | sv60a | sv60a1 : LP-DIR-AF | 219.1 | -21.0 | 7.0 | 0.0 | 2.3 | NaN | NaN | NaN | 0.08 | 0.020 | T | Recalculated from original measurements; super... | u | This study | LP-DIR-AF : SO-SUN : DE-BFL : DA-DIR-GEO | l | sv60 |
166 | sv60c1 | sv60c | sv60c1 : LP-DIR-AF | 179.5 | -28.9 | 5.0 | 0.0 | 0.7 | NaN | NaN | NaN | 0.06 | 0.020 | T | Recalculated from original measurements; super... | u | This study | LP-DIR-AF : SO-SUN : DE-BFL : DA-DIR-GEO | l | sv60 |
167 | sv61a1 | sv61a | sv61a1 : LP-DIR-AF | 163.3 | 61.0 | 10.0 | 0.0 | 1.0 | NaN | NaN | NaN | 0.18 | 0.010 | T | Recalculated from original measurements; super... | u | This study | LP-DIR-AF : SO-SUN : DE-BFL : DA-DIR-GEO | l | sv61 |
168 | sv61c1 | sv61c | sv61c1 : LP-DIR-AF | 264.2 | 2.6 | 9.0 | 0.0 | 4.8 | NaN | NaN | NaN | 0.08 | 0.000 | T | Recalculated from original measurements; super... | u | This study | LP-DIR-AF : SO-SUN : DE-BFP : DA-DIR-GEO | p | sv61 |
169 | sv61g1 | sv61g | sv61g1 : LP-DIR-AF | 348.2 | 63.4 | 7.0 | 0.0 | 1.4 | NaN | NaN | NaN | 0.16 | 0.060 | T | Recalculated from original measurements; super... | u | This study | LP-DIR-AF : SO-SUN : DE-BFL : DA-DIR-GEO | l | sv61 |
170 | sv61i1 | sv61i | sv61i1 : LP-DIR-AF | 356.8 | 54.4 | 5.0 | 0.0 | 1.6 | NaN | NaN | NaN | 0.14 | 0.070 | T | Recalculated from original measurements; super... | u | This study | LP-DIR-AF : SO-SUN : DE-BFL : DA-DIR-GEO | l | sv61 |
171 | sv62b1 | sv62b | sv62b1 : LP-DIR-AF | 68.8 | 19.6 | 9.0 | 0.0 | 2.2 | NaN | NaN | NaN | 0.08 | 0.000 | T | Recalculated from original measurements; super... | u | This study | LP-DIR-AF : SO-SUN : DE-BFP : DA-DIR-GEO | p | sv62 |
172 | sv62d1 | sv62d | sv62d1 : LP-DIR-AF | 345.3 | -35.3 | 11.0 | 0.0 | 2.3 | NaN | NaN | NaN | 0.12 | 0.000 | T | Recalculated from original measurements; super... | u | This study | LP-DIR-AF : SO-SUN : DE-BFP : DA-DIR-GEO | p | sv62 |
173 | sv62g1 | sv62g | sv62g1 : LP-DIR-AF | 127.1 | 35.1 | 7.0 | 0.0 | 3.8 | NaN | NaN | NaN | 0.07 | 0.010 | T | Recalculated from original measurements; super... | u | This study | LP-DIR-AF : SO-SUN : DE-BFP : DA-DIR-GEO | p | sv62 |
174 | sv63a1 | sv63a | sv63a1 : LP-DIR-AF | 354.9 | 43.6 | 8.0 | 0.0 | 1.3 | NaN | NaN | NaN | 0.12 | 0.030 | T | Recalculated from original measurements; super... | u | This study | LP-DIR-AF : SO-SUN : DE-BFL : DA-DIR-GEO | l | sv63 |
175 | sv63f1 | sv63f | sv63f1 : LP-DIR-AF | 353.0 | 48.0 | 5.0 | 0.0 | 0.8 | NaN | NaN | NaN | 0.05 | 0.010 | T | Recalculated from original measurements; super... | u | This study | LP-DIR-AF : SO-SUN : DE-BFL : DA-DIR-GEO | l | sv63 |
176 | sv63h1 | sv63h | sv63h1 : LP-DIR-AF | 351.9 | 46.4 | 6.0 | 0.0 | 0.9 | NaN | NaN | NaN | 0.08 | 0.030 | T | Recalculated from original measurements; super... | u | This study | LP-DIR-AF : SO-SUN : DE-BFL : DA-DIR-GEO | l | sv63 |
177 | sv63j1 | sv63j | sv63j1 : LP-DIR-AF | 346.5 | 42.3 | 6.0 | 0.0 | 2.0 | NaN | NaN | NaN | 0.08 | 0.030 | T | Recalculated from original measurements; super... | u | This study | LP-DIR-AF : SO-SUN : DE-BFL : DA-DIR-GEO | l | sv63 |
178 | sv64b1 | sv64b | sv64b1 : LP-DIR-AF | 173.1 | -51.1 | 8.0 | 0.0 | 1.2 | NaN | NaN | NaN | 0.15 | 0.020 | T | Recalculated from original measurements; super... | u | This study | LP-DIR-AF : SO-SUN : DE-BFL : DA-DIR-GEO | l | sv64 |
179 | sv64c2 | sv64c | sv64c2 : LP-DIR-AF | 185.3 | -45.3 | 10.0 | 0.0 | 1.2 | NaN | NaN | NaN | 0.18 | 0.040 | T | Recalculated from original measurements; super... | u | This study | LP-DIR-AF : SO-SUN : DE-BFL : DA-DIR-GEO | l | sv64 |
180 | sv64e1 | sv64e | sv64e1 : LP-DIR-AF | 162.4 | -53.0 | 11.0 | 0.0 | 0.6 | NaN | NaN | NaN | 0.16 | 0.020 | T | Recalculated from original measurements; super... | u | This study | LP-DIR-AF : SO-SUN : DE-BFL : DA-DIR-GEO | l | sv64 |
181 | sv64g1 | sv64g | sv64g1 : LP-DIR-AF | 167.9 | -53.7 | 8.0 | 0.0 | 0.8 | NaN | NaN | NaN | 0.12 | 0.030 | T | Recalculated from original measurements; super... | u | This study | LP-DIR-AF : SO-SUN : DE-BFL : DA-DIR-GEO | l | sv64 |
182 | sv64h1 | sv64h | sv64h1 : LP-DIR-AF | 165.5 | -52.9 | 8.0 | 0.0 | 0.8 | NaN | NaN | NaN | 0.15 | 0.020 | T | Recalculated from original measurements; super... | u | This study | LP-DIR-AF : SO-SUN : DE-BFL : DA-DIR-GEO | l | sv64 |
183 | sv64i1 | sv64i | sv64i1 : LP-DIR-AF | 165.9 | -52.1 | 10.0 | 0.0 | 0.4 | NaN | NaN | NaN | 0.14 | 0.020 | T | Recalculated from original measurements; super... | u | This study | LP-DIR-AF : SO-SUN : DE-BFL : DA-DIR-GEO | l | sv64 |
184 | sv65a1 | sv65a | sv65a1 : LP-DIR-AF | 32.7 | 65.0 | 6.0 | 0.0 | 0.8 | NaN | NaN | NaN | 0.07 | 0.020 | T | Recalculated from original measurements; super... | u | This study | LP-DIR-AF : SO-CMD-NORTH : DE-BFL : DA-DIR-GEO | l | sv65 |
185 | sv65c1 | sv65c | sv65c1 : LP-DIR-AF | 33.1 | 49.7 | 5.0 | 0.0 | 1.0 | NaN | NaN | NaN | 0.10 | 0.050 | T | Recalculated from original measurements; super... | u | This study | LP-DIR-AF : SO-CMD-NORTH : DE-BFL : DA-DIR-GEO | l | sv65 |
186 | sv65d1 | sv65d | sv65d1 : LP-DIR-AF | 25.9 | 45.0 | 6.0 | 0.0 | 2.5 | NaN | NaN | NaN | 0.10 | 0.040 | T | Recalculated from original measurements; super... | u | This study | LP-DIR-AF : SO-CMD-NORTH : DE-BFL : DA-DIR-GEO | l | sv65 |
187 | sv65e1 | sv65e | sv65e1 : LP-DIR-AF | 29.9 | 58.3 | 5.0 | 0.0 | 1.3 | NaN | NaN | NaN | 0.08 | 0.040 | T | Recalculated from original measurements; super... | u | This study | LP-DIR-AF : SO-CMD-NORTH : DE-BFL : DA-DIR-GEO | l | sv65 |
188 | sv65g1 | sv65g | sv65g1 : LP-DIR-AF | 289.8 | 10.4 | 10.0 | 0.0 | 5.0 | NaN | NaN | NaN | 0.15 | 0.000 | T | Recalculated from original measurements; super... | u | This study | LP-DIR-AF : SO-SUN : DE-BFP : DA-DIR-GEO | p | sv65 |
189 rows × 20 columns
# get the site names
sites=spec_df.site.unique()
sites
array(['sv01', 'sv02', 'sv03', 'sv04', 'sv05', 'sv06', 'sv07', 'sv08', 'sv09', 'sv10', 'sv11', 'sv12', 'sv15', 'sv16', 'sv17', 'sv18', 'sv19', 'sv20', 'sv21', 'sv22', 'sv23', 'sv24', 'sv25', 'sv26', 'sv27', 'sv28', 'sv30', 'sv31', 'sv32', 'sv50', 'sv51', 'sv52', 'sv53', 'sv54', 'sv55', 'sv56', 'sv57', 'sv58', 'sv59', 'sv60', 'sv61', 'sv62', 'sv63', 'sv64', 'sv65'], dtype=object)
Let's plot up the first 10 or so.
help(pmag.dolnp)
Help on function dolnp in module pmagpy.pmag:

dolnp(data, direction_type_key)
    Returns fisher mean, a95 for data using method of Mcfadden and Mcelhinny '88 for lines and planes

    Parameters
    __________
    Data : nested list of dictionaries with keys
        Data model 3.0:
            dir_dec
            dir_inc
            dir_tilt_correction
            method_codes
        Data model 2.5:
            dec
            inc
            tilt_correction
            magic_method_codes
    direction_type_key : ['specimen_direction_type']

    Returns
    -------
    ReturnData : dictionary with keys
        dec : fisher mean dec of data in Data
        inc : fisher mean inc of data in Data
        n_lines : number of directed lines [method_code = DE-BFL or DE-FM]
        n_planes : number of best fit planes [method_code = DE-BFP]
        alpha95 : fisher confidence circle from Data
        R : fisher R value of Data
        K : fisher k value of Data

    Effects
    -------
    prints to screen in case of no data
help(pmagplotlib.plot_lnp)
Help on function plot_lnp in module pmagpy.pmagplotlib:

plot_lnp(fignum, s, datablock, fpars, direction_type_key)
    plots lines and planes on a great circle with alpha 95 and mean

    Parameters
    _________
    fignum : number of plt.figure() object
    datablock : nested list of dictionaries with keys in 3.0 or 2.5 format
        3.0 keys: dir_dec, dir_inc, dir_tilt_correction = [-1,0,100], direction_type_key = ['p','l']
        2.5 keys: dec, inc, tilt_correction = [-1,0,100], direction_type_key = ['p','l']
    fpars : Fisher parameters calculated by, e.g., pmag.dolnp() or pmag.dolnp3_0()
    direction_type_key : key for dictionary direction_type ('specimen_direction_type')

    Effects
    _______
    plots the site level figure
cnt=1
for site in sites[0:10]:
pmagplotlib.plot_init(cnt, 5, 5)
site_data=spec_df[spec_df.site==site].to_dict('records')
fpars=pmag.dolnp(site_data,'specimen_direction_type')
pmagplotlib.plot_lnp(cnt,site,site_data,fpars,'specimen_direction_type')
cnt+=1
[Essentials Chapter 2] [command line version]
This program generates a Lowes (1974, doi: 10.1111/j.1365-246X.1974.tb00622.x) spectrum from IGRF-like field models. It will take a specified date, get the Gauss coefficients from pmag.doigrf(), unpack them into a usable format with pmag.unpack() and calculate the spectrum with pmag.lowes().
help(pmag.unpack)
Help on function unpack in module pmagpy.pmag:

unpack(gh)
    unpacks gh list into l m g h type list

    Parameters
    _________
    gh : list of gauss coefficients (as returned by, e.g., doigrf)

    Returns
    -------
    data : nested list of [[l,m,g,h],...]
help(pmag.lowes)
Help on function lowes in module pmagpy.pmag:

lowes(data)
    gets Lowe's power spectrum from gauss coefficients

    Parameters
    _________
    data : nested list of [[l,m,g,h],...] as from pmag.unpack()

    Returns
    _______
    Ls : list of degrees (l)
    Rs : power at degree l
So let's do it!
date=1956 # pick a date and what better one than my birth year?
coeffs=pmag.doigrf(0,0,0,date,coeffs=1) # get the gauss coefficients
data=pmag.unpack(coeffs) # unpack them into the form that lowes likes
Ls,Rs=pmag.lowes(data) # get the power spectrum
plt.plot(Ls,Rs,linewidth=2,label=str(date)) # make the plot
plt.semilogy() # semi log it
plt.xlabel('Degree (l)')
plt.ylabel('Power ($\mu$T$^2$)')
plt.legend();
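Under the hood, the Lowes spectrum is just $R_l = (l+1)\sum_m \left[(g_l^m)^2 + (h_l^m)^2\right]$. A small stand-alone sketch of that sum (lowes_spectrum is a hypothetical name; it assumes the nested [[l,m,g,h],...] list that pmag.unpack() returns):

```python
def lowes_spectrum(data):
    """Lowes power R_l = (l+1) * sum_m (g_lm^2 + h_lm^2), from a nested
    [[l, m, g, h], ...] list like the one pmag.unpack() returns."""
    degrees = sorted({row[0] for row in data})
    Rs = [(l + 1) * sum(g ** 2 + h ** 2 for (ll, m, g, h) in data if ll == l)
          for l in degrees]
    return degrees, Rs

# a dipole-only field (g10, g11, h11 in nT) has power only at degree 1:
Ls, Rs = lowes_spectrum([[1, 0, -30000, 0], [1, 1, -2000, 5000]])
```

The steep drop of the plotted spectrum with degree l reflects the dominance of the dipole term in this sum at the Earth's surface.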
[Essentials Chapter 8] [command line versions]
Someone (Saiko Sugisaki) subjected a number of specimens from IODP Expedition 318 Site U1361 specimens to a Lowrie (1990, doi: 10.1029/GL017i002p00159) 3-D IRM experiment (published as Tauxe et al., 2015, doi:10.1016/j.epsl.2014.12.034). lowrie makes plots of blocking temperature for the three coercivity fractions.
Both lowrie and lowrie_magic take specimen level 3D-IRM data, break them into the cartesian coordinates of the three IRM field directions and plot the different components versus demagnetizing temperature. We can do this with our powerful Pandas and matplotlib.
The relevant MagIC database method code is 'LP-IRM-3D', the magnetization code is one of the usual ones (in this example it is 'magn_moment') and the temperature step is the usual data model 3.0 'treat_temp', in kelvin.
We will use pmag.dir2cart() for the heavy lifting. I also happen to know (because I wrote the original paper) that the X direction was the 1.0 tesla step, Y was 0.5 tesla and Z was 0.1 tesla, so we can put these in the legend.
help(pmag.dir2cart)
Help on function dir2cart in module pmagpy.pmag:

dir2cart(d)
    Converts a list or array of vector directions in degrees (declination, inclination)
    to an array of the direction in cartesian coordinates (x,y,z)

    Parameters
    ----------
    d : list or array of [dec,inc] or [dec,inc,intensity]

    Returns
    -------
    cart : array of [x,y,z]

    Examples
    --------
    >>> pmag.dir2cart([200,40,1])
    array([-0.71984631, -0.26200263, 0.64278761])
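For reference, the conversion itself is one line per component; here is a hedged numpy sketch of what pmag.dir2cart() computes for a single [dec, inc, intensity] triple (dir2cart_sketch is a made-up name, not the PmagPy implementation):

```python
import numpy as np

def dir2cart_sketch(d):
    """[dec, inc, (intensity)] in degrees -> cartesian [x, y, z];
    a single-direction stand-in for pmag.dir2cart."""
    dec, inc = np.radians(d[0]), np.radians(d[1])
    intensity = d[2] if len(d) == 3 else 1.0
    return intensity * np.array([np.cos(dec) * np.cos(inc),
                                 np.sin(dec) * np.cos(inc),
                                 np.sin(inc)])

xyz = dir2cart_sketch([200, 40, 1])  # reproduces the help() example above
```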
# read in the data file
meas_df=pd.read_csv('data_files/lowrie_magic/measurements.txt',sep='\t',header=1)
# pick out the 3d-IRM data
meas_df=meas_df[meas_df.method_codes.str.contains('LP-IRM-3D')]
# get a list of specimen names
specimens=meas_df.specimen.unique()
cnt=1 # set figure counter
for specimen in specimens[0:10]: # step through first 10
spec_df=meas_df[meas_df.specimen==specimen] # collect this specimen's data
dirs=np.array(spec_df[['dir_dec','dir_inc','magn_moment']])
norm=dirs[0][2] # let's normalize to the initial intensity
carts=np.absolute((pmag.dir2cart(dirs)/norm)).transpose() # get the X,Y,Z data
temps=spec_df['treat_temp']-273 # convert to Celsius
plt.figure(cnt,(6,6))
plt.plot(temps,carts[0],'ro',label='1 T')
plt.plot(temps,carts[0],'r-')
plt.plot(temps,carts[1],'cs',label='0.5 T')
plt.plot(temps,carts[1],'c-')
plt.plot(temps,carts[2],'k^',label='0.1 T')
plt.plot(temps,carts[2],'k-')
plt.title(specimen+' : Lowrie 3-D IRM')
plt.legend();
cnt+=1
[Essentials Chapter 11] [command line version]
pca calculates best-fit lines, planes or Fisher means through selected treatment steps along with Kirschvink (1980, doi: 10.1111/j.1365-246X.1980.tb02601.x) MAD values. The file format is a simple space delimited file with specimen name, treatment step, intensity, declination and inclination. pca calls pmag.domean(), so that is what we will do here.
help(pmag.domean)
Help on function domean in module pmagpy.pmag: domean(data, start, end, calculation_type) Gets average direction using Fisher or principal component analysis (line or plane) methods Parameters ---------- data : nest list of data: [[treatment,dec,inc,int,quality],...] start : step being used as start of fit (often temperature minimum) end : step being used as end of fit (often temperature maximum) calculation_type : string describing type of calculation to be made 'DE-BFL' (line), 'DE-BFL-A' (line-anchored), 'DE-BFL-O' (line-with-origin), 'DE-BFP' (plane), 'DE-FM' (Fisher mean) Returns ------- mpars : dictionary with the keys "specimen_n","measurement_step_min", "measurement_step_max","specimen_mad","specimen_dec","specimen_inc"
# read in data as space delimited file
data=pd.read_csv('data_files/pca/pca_example.txt',\
delim_whitespace=True,header=None)
# we need to add a column for quality
data['quality']='g'
# strip off the specimen name and reorder records
# from: int,dec,inc to: dec,inc,int
data=data[[1,3,4,2,'quality']].values.tolist()
pmag.domean(data,1,10,'DE-BFL')
{'calculation_type': 'DE-BFL', 'center_of_mass': [1.9347888195464598e-05, -2.1736620227095438e-05, 2.5042313896882542e-05], 'specimen_direction_type': 'l', 'specimen_dec': 334.9058336155927, 'specimen_inc': 51.50973235790523, 'specimen_mad': 8.75370050160012, 'specimen_n': 10, 'specimen_dang': 19.257783100769142, 'measurement_step_min': 2.5, 'measurement_step_max': 70.0}
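For intuition, the heart of the 'DE-BFL' fit is a principal component analysis of the demagnetization steps, with the MAD of Kirschvink (1980). Here is a stripped-down sketch of that idea (illustrative only, not the pmag.domean() algorithm):

```python
import numpy as np

def pca_line_sketch(xyz):
    """Best-fit line through demag steps (rows of x, y, z) plus the MAD."""
    X = xyz - xyz.mean(axis=0)        # center on the center of mass (free fit)
    T = X.T @ X                       # orientation tensor
    tau, V = np.linalg.eigh(T)        # eigh returns ascending eigenvalues
    tau, V = tau[::-1], V[:, ::-1]    # sort descending
    direction = V[:, 0]               # principal axis = best-fit line
    mad = np.degrees(np.arctan(np.sqrt(max((tau[1] + tau[2]) / tau[0], 0.0))))
    return direction, mad

# perfectly collinear demag steps give a MAD of essentially zero
steps = np.outer(np.linspace(1, 0.1, 10), [1.0, 2.0, 2.0])
d, mad = pca_line_sketch(steps)
print(d, mad)
```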
This can be done directly with matplotlib.
This program reads in a data file, sorts it, and plots the data as a cumulative distribution function (using pmagplotlib.plot_cdf()). But we can do this directly from within the notebook without much fuss. And for plot_2cdfs, just do this twice.
# read the data in
data=np.loadtxt('data_files/plot_cdf/gaussian.out')
# sort the data
x=np.sort(data)
# create a y array
y=np.linspace(0,1,data.shape[0])
plt.plot(x,y,'r-')
# label
plt.xlabel('Data')
plt.ylabel('Cumulative Distribution');
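For plot_2cdfs, the same few lines simply run twice. A sketch with two assumed sample arrays (the CDF coordinates are computed here; each pair would then go to plt.plot):

```python
import numpy as np

def cdf_xy(data):
    """Return x (sorted data) and y (cumulative fraction) for a CDF plot."""
    x = np.sort(data)
    y = np.linspace(0, 1, len(x))
    return x, y

a = np.random.normal(0, 1, 200)
b = np.random.normal(0.5, 1, 200)
xa, ya = cdf_xy(a)
xb, yb = cdf_xy(b)
# plt.plot(xa, ya, 'r-'); plt.plot(xb, yb, 'b-')  # two CDFs on one axis
```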
Geomagia is a database specially designed for archaeomagnetic and volcanic data for the last 50 kyr with a friendly search interface. plot_geomagia is meant to plot data from files downloaded from the geomagia website: http://geomagia.gfz-potsdam.de/geomagiav3/AAquery.php. We can do this within the notebook. The example used here is for Sicily, so if we felt like it, we could combine it with ipmag.igrf() using one of the data models (which are in large part based on data in the geomagia database).
Here we want to plot inclination as a function of age.
geomagia=pd.read_csv('data_files/geomagia/geomagia_sel.txt',header=1)
geomagia.head()
Age[yr.AD] | Sigma-ve[yr.] | Sigma+ve[yr.] | SigmaAgeID | N_Ba | n_Ba[meas.] | n_Ba[acc.] | Ba[microT] | SigmaBa[microT] | VDM[E22_AmE2] | ... | SpecTypeID | RefID | CompilationID | UploadMonth | UploadYear | Uploader | Editor | LastEditDate | C14ID | UID | |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | 1607 | -9999 | -9999 | 1 | -999 | -999 | -999 | -999.0 | -999.0 | -999.00 | ... | 3 | 36 | 1005 | -999 | 2009 | Fabio Donadini | Maxwell Brown | 24/07/2017 | -1 | 857 |
1 | 1610 | 10 | 10 | 1 | -999 | -999 | -999 | -999.0 | -999.0 | -999.00 | ... | 0 | 259 | 1001 | -999 | 2009 | Fabio Donadini | -999 | -999 | -1 | 5666 |
2 | 1610 | -9999 | -9999 | 1 | 1 | -999 | 5 | 42.1 | 3.2 | 6.74 | ... | 3 | 36 | 1001;1006;1010 | -999 | 2007 | Fabio Donadini | Maxwell Brown | 24/07/2017 | -1 | 858 |
3 | 1610 | 5 | 5 | 1 | 1 | -999 | 4 | 40.5 | 8.1 | -999.00 | ... | 1 | 57 | 1001;1006;1010;1011 | -999 | 2007 | Fabio Donadini | -999 | -999 | -1 | 1403 |
4 | 1614 | -9999 | -9999 | 1 | 1 | -999 | 5 | 40.8 | 3.2 | -999.00 | ... | 3 | 36 | 1001;1006;1010 | -999 | 2007 | Fabio Donadini | Maxwell Brown | 24/07/2017 | -1 | 836 |
5 rows × 46 columns
We have to 'clean' the dataset by getting rid of the records with no inclinations (-999). We can use pandas' filtering power for that:
geomagia_incs=geomagia[geomagia['Inc[deg.]']>-90]
geomagia_incs['Inc[deg.]']
0 62.8 1 60.0 2 65.1 5 65.0 6 61.5 7 61.6 9 61.4 10 57.5 11 62.2 12 57.3 13 60.7 14 62.5 15 60.4 17 61.8 18 63.3 19 64.0 20 65.4 21 64.1 22 57.7 23 54.2 24 54.2 25 56.6 26 58.3 28 56.1 29 57.9 31 53.2 32 54.8 35 55.7 37 51.5 38 52.0 42 50.5 43 49.8 44 49.8 46 51.7 48 51.3 50 50.7 51 49.0 52 51.6 53 52.3 56 50.9 59 49.0 Name: Inc[deg.], dtype: float64
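The -999/-9999 sentinels turn up in many Geomagia columns, so a more general cleaning pattern is to convert them to proper missing values first and then drop the missing rows. A sketch on a tiny stand-in DataFrame:

```python
import numpy as np
import pandas as pd

# stand-in frame with the same column names as the Geomagia download
df = pd.DataFrame({'Age[yr.AD]': [1607, 1610, 1614],
                   'Inc[deg.]': [62.8, -999.0, 65.0]})
# turn the sentinels into NaN, then drop rows with no inclination
clean = df.replace([-999, -999.0, -9999], np.nan).dropna(subset=['Inc[deg.]'])
print(len(clean))
```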
plt.plot(geomagia_incs['Age[yr.AD]'],geomagia_incs['Inc[deg.]'],'ro')
plt.xlabel('Age (CE)')
plt.ylabel('Inclination');
This program was designed to make color contour maps of geomagnetic field elements drawn from various IGRF-like field models (see also igrf).
It calls pmag.do_mag_map() to generate arrays for plotting with the pmagplotlib.plot_mag_map() function. We can do that from within this notebook. NB: The cartopy version of this is still a bit buggy and functions best with the PlateCarree projection.
help(pmag.do_mag_map)
Help on function do_mag_map in module pmagpy.pmag: do_mag_map(date, lon_0=0, alt=0, file='', mod='cals10k', resolution='low') returns lists of declination, inclination and intensities for lat/lon grid for desired model and date. Parameters: _________________ date = Required date in decimal years (Common Era, negative for Before Common Era) Optional Parameters: ______________ mod = model to use ('arch3k','cals3k','pfm9k','hfm10k','cals10k.2','shadif14k','cals10k.1b','custom') file = l m g h formatted filefor custom model lon_0 : central longitude for Hammer projection alt = altitude resolution = ['low','high'] default is low Returns: ______________ Bdec=list of declinations Binc=list of inclinations B = list of total field intensities in nT Br = list of radial field intensities lons = list of longitudes evaluated lats = list of latitudes evaluated
# define some useful parameters
date,mod,lon_0,alt,ghfile=1956.0725,'cals10k.2',0,0,"" # only date is required
Ds,Is,Bs,Brs,lons,lats=pmag.do_mag_map(date,mod=mod,lon_0=lon_0,alt=alt,file=ghfile)
help(ipmag.igrf)
Help on function igrf in module pmagpy.ipmag: igrf(input_list, mod='', ghfile='') Determine Declination, Inclination and Intensity from the IGRF model. (http://www.ngdc.noaa.gov/IAGA/vmod/igrf.html) Parameters ---------- input_list : list with format [Date, Altitude, Latitude, Longitude] date must be in decimal year format XXXX.XXXX (Common Era) mod : desired model "" : Use the IGRF custom : use values supplied in ghfile or choose from this list ['arch3k','cals3k','pfm9k','hfm10k','cals10k.2','cals10k.1b'] where: arch3k (Korte et al., 2009) cals3k (Korte and Constable, 2011) cals10k.1b (Korte et al., 2011) pfm9k (Nilsson et al., 2014) hfm10k is the hfm.OL1.A1 of Constable et al. (2016) cals10k.2 (Constable et al., 2016) the first four of these models, are constrained to agree with gufm1 (Jackson et al., 2000) for the past four centuries gh : path to file with l m g h data Returns ------- igrf_array : array of IGRF values (0: dec; 1: inc; 2: intensity (in nT)) Examples -------- >>> local_field = ipmag.igrf([2013.6544, .052, 37.87, -122.27]) >>> local_field array([ 1.39489916e+01, 6.13532008e+01, 4.87452644e+04]) >>> ipmag.igrf_print(local_field) Declination: 13.949 Inclination: 61.353 Intensity: 48745.264 nT
help(pmagplotlib.plot_mag_map)
Help on function plot_mag_map in module pmagpy.pmagplotlib: plot_mag_map(fignum, element, lons, lats, element_type, cmap='coolwarm', lon_0=0, date='', contours=False, proj='PlateCarree') makes a color contour map of geomagnetic field element Parameters ____________ fignum : matplotlib figure number element : field element array from pmag.do_mag_map for plotting lons : longitude array from pmag.do_mag_map for plotting lats : latitude array from pmag.do_mag_map for plotting element_type : [B,Br,I,D] geomagnetic element type B : field intensity Br : radial field intensity I : inclinations D : declinations Optional _________ contours : plot the contour lines on top of the heat map if True proj : cartopy projection ['PlateCarree','Mollweide'] NB: The Mollweide projection can only be reliably with cartopy=0.17.0; otherwise use lon_0=0. Also, for declinations, PlateCarree is recommended. cmap : matplotlib color map - see https://matplotlib.org/examples/color/colormaps_reference.html for options lon_0 : central longitude of the Mollweide projection date : date used for field evaluation, if custom ghfile was used, supply filename Effects ______________ plots a color contour map with the desired field element
cmap='RdYlBu' # nice color map for contourf
if has_cartopy:
pmagplotlib.plot_mag_map(1,Bs,lons,lats,'B',date=date,proj='Mollweide',contours=True) # plot the field strength
pmagplotlib.plot_mag_map(2,Is,lons,lats,'I',date=date,proj='Mollweide',contours=True)# plot the inclination
pmagplotlib.plot_mag_map(3,Ds,lons,lats,'D',date=date,contours=True)# plot the declination
elif has_basemap:
pmagplotlib.plot_mag_map_basemap(1,Bs,lons,lats,'B',date=date) # plot the field strength
pmagplotlib.plot_mag_map_basemap(2,Is,lons,lats,'I',date=date)# plot the inclination
pmagplotlib.plot_mag_map_basemap(3,Ds,lons,lats,'D',date=date)# plot the declination
This program will generate a simple map of the data points read from a file (lon lat) on the desired projection. If you want to use high resolution or the etopo20 meshgrid with basemap, you must install the etopo20 data files (run install_etopo.py from the command line).
This program sets a bunch of options and calls pmagplotlib.plot_map(). Note, if Basemap is installed, you can use pmagplotlib.plot_map_basemap() instead which uses the older (but less buggy) and soon to be deprecated Basemap plotting package.
help(pmagplotlib.plot_map)
Help on function plot_map in module pmagpy.pmagplotlib: plot_map(fignum, lats, lons, Opts) makes a cartopy map with lats/lons Requires installation of cartopy Parameters: _______________ fignum : matplotlib figure number lats : array or list of latitudes lons : array or list of longitudes Opts : dictionary of plotting options: Opts.keys= proj : projection [supported cartopy projections: pc = Plate Carree aea = Albers Equal Area aeqd = Azimuthal Equidistant lcc = Lambert Conformal lcyl = Lambert Cylindrical merc = Mercator mill = Miller Cylindrical moll = Mollweide [default] ortho = Orthographic robin = Robinson sinu = Sinusoidal stere = Stereographic tmerc = Transverse Mercator utm = UTM [set zone and south keys in Opts] laea = Lambert Azimuthal Equal Area geos = Geostationary npstere = North-Polar Stereographic spstere = South-Polar Stereographic latmin : minimum latitude for plot latmax : maximum latitude for plot lonmin : minimum longitude for plot lonmax : maximum longitude lat_0 : central latitude lon_0 : central longitude sym : matplotlib symbol symsize : symbol size in pts edge : markeredgecolor cmap : matplotlib color map res : resolution [c,l,i,h] for low/crude, intermediate, high boundinglat : bounding latitude sym : matplotlib symbol for plotting symsize : matplotlib symbol size for plotting names : list of names for lats/lons (if empty, none will be plotted) pltgrd : if True, put on grid lines padlat : padding of latitudes padlon : padding of longitudes gridspace : grid line spacing global : global projection [default is True] oceancolor : 'azure' landcolor : 'bisque' [choose any of the valid color names for matplotlib see https://matplotlib.org/examples/color/named_colors.html details : dictionary with keys: coasts : if True, plot coastlines rivers : if True, plot rivers states : if True, plot states countries : if True, plot countries ocean : if True, plot ocean fancy : if True, plot etopo 20 grid NB: etopo must be installed if Opts keys not set 
These are the defaults: Opts={'latmin':-90,'latmax':90,'lonmin':0,'lonmax':360,'lat_0':0,'lon_0':0,'proj':'moll','sym':'ro','symsize':5,'edge':'black','pltgrid':1,'res':'c','boundinglat':0.,'padlon':0,'padlat':0,'gridspace':30,'details':all False,'edge':None,'cmap':'jet','fancy':0,'zone':'','south':False,'oceancolor':'azure','landcolor':'bisque'}
# read in some data:
# this is the cartopy version
data=np.loadtxt('data_files/plot_map_pts/uniform.out').transpose()
lons=data[0] # longitudes array
lats=data[1] # latitudes array
# set some options
Opts={}
Opts['sym']='bo' # sets the symbol to blue dots
Opts['symsize']=3 # sets symbol size to 3 pts
Opts['proj']='robin' # Robinson projection
Opts['details']={}
Opts['details']['coasts']=True
plt.figure(1,(10,10)) # optional - make a map
if has_cartopy:
pmagplotlib.plot_map(1, lats, lons, Opts)
elif has_basemap:
pmagplotlib.plot_map_basemap(1, lats, lons, Opts)
gridlines only supported for PlateCarree, Lambert Conformal, and Mercator plots currently
# read in some data:
data=np.loadtxt('data_files/plot_map_pts/uniform.out').transpose()
lons=data[0] # longitudes array
lats=data[1] # latitudes array
# set some options
Opts={}
Opts['sym']='wo' # sets the symbol to white dots
Opts['symsize']=3 # sets symbol size to 3 pts
Opts['proj']='pc' # Plate Carree projection
Opts['edge']='black'
Opts['details']={}
Opts['details']['fancy']=True # this option takes a while....
if has_cartopy:
plt.figure(1,(8,8)) # optional - make a map
pmagplotlib.plot_map(1, lats, lons, Opts)
elif has_basemap: # this only works if you have basemap installed
plt.figure(1,(6,6)) # optional - make a map
pmagplotlib.plot_map_basemap(1, lats, lons, Opts)
Here's an example with a simple site location.
Opts={}
Opts['sym']='r*' # sets the symbol to a red star
Opts['symsize']=100 # sets symbol size to 100 pts
Opts['proj']='lcc' # Lambert Conformal projection
Opts['pltgrid']=True
Opts['lat_0']=33
Opts['lon_0']=260
Opts['latmin']=20
Opts['latmax']=52
Opts['lonmin']=-130
Opts['lonmax']=-70
Opts['gridspace']=10
Opts['details']={}
Opts['details']['coasts']=True
Opts['details']['ocean']=True
Opts['details']['countries']=True
Opts['global']=False
lats,lons=[33],[-117]
plt.figure(1,(10,10)) # optional - make a map
if has_cartopy:
pmagplotlib.plot_map(1, lats, lons, Opts)
elif has_basemap:
pmagplotlib.plot_map_basemap(1, lats, lons, Opts)
[Essentials Chapter 11] [command line version]
plotdi_a reads in a data file with declination, inclination and $\alpha_{95}$ data in it and plots the directions along with the confidence circles.
We can use the function ipmag.plot_di_mean() for this.
help(ipmag.plot_di_mean)
Help on function plot_di_mean in module pmagpy.ipmag: plot_di_mean(dec, inc, a95, color='k', marker='o', markersize=20, label='', legend='no') Plot a mean direction (declination, inclination) with alpha_95 ellipse on an equal area plot. Before this function is called, a plot needs to be initialized with code that looks something like: >fignum = 1 >plt.figure(num=fignum,figsize=(10,10),dpi=160) >ipmag.plot_net(fignum) Required Parameters ----------- dec : declination of mean being plotted inc : inclination of mean being plotted a95 : a95 confidence ellipse of mean being plotted Optional Parameters (defaults are used if not specified) ----------- color : the default color is black. Other colors can be chosen (e.g. 'r'). marker : the default is a circle. Other symbols can be chosen (e.g. 's'). markersize : the default is 20. Other sizes can be chosen. label : the default is no label. Labels can be assigned. legend : the default is no legend ('no'). Putting 'yes' will plot a legend.
# read in some data
data=np.loadtxt('data_files/plotdi_a/plotdi_a_example.dat').transpose()
decs=data[0] # array of declinations
incs=data[1] # array of inclinations
a95s=data[2] # array of alpha95s
# make the plots
fignum=1
plt.figure(num=fignum,figsize=(3,3)) # make a figure object
ipmag.plot_net(fignum) # plot the equal area net
for pt in range(decs.shape[0]): # step through the data
ipmag.plot_di_mean(dec=decs[pt],inc=incs[pt],a95=a95s[pt],color='blue')
[Essentials Chapter 16] [command line version]
polemap_magic plots poles from a MagIC formatted locations.txt file. Alternatively, we can use ipmag.plot_vgp() for this, but substituting paleomagnetic poles for VGPs (the math is the same). We'll try this out on a set of poles downloaded from the MagIC database for the Cretaceous of Europe.
Let's try it both ways, first with ipmag.plot_vgp( ):
help(ipmag.plot_vgp)
Help on function plot_vgp in module pmagpy.ipmag: plot_vgp(map_axis, vgp_lon=None, vgp_lat=None, di_block=None, label='', color='k', marker='o', edge='black', markersize=20, legend=False) This function plots a paleomagnetic pole position on a cartopy map axis. Before this function is called, a plot needs to be initialized with code such as that in the make_orthographic_map function. Example ------- >>> vgps = ipmag.fishrot(dec=200,inc=30) >>> vgp_lon_list,vgp_lat_list,intensities= ipmag.unpack_di_block(vgps) >>> map_axis = ipmag.make_orthographic_map(central_longitude=200,central_latitude=30) >>> ipmag.plot_vgp(map_axis,vgp_lon=vgp_lon_list,vgp_lat=vgp_lat_list,color='red',markersize=40) Required Parameters ----------- map_axis : the name of the current map axis that has been developed using cartopy plon : the longitude of the paleomagnetic pole being plotted (in degrees E) plat : the latitude of the paleomagnetic pole being plotted (in degrees) Optional Parameters (defaults are used if not specified) ----------- color : the color desired for the symbol (default is 'k' aka black) marker : the marker shape desired for the pole mean symbol (default is 'o' aka a circle) edge : the color of the edge of the marker (default is black) markersize : size of the marker in pt (default is 20) label : the default is no label. Labels can be assigned. legend : the default is no legend (False). Putting True will plot a legend.
help(ipmag.make_orthographic_map)
Help on function make_orthographic_map in module pmagpy.ipmag: make_orthographic_map(central_longitude=0, central_latitude=0, figsize=(8, 8), add_land=True, land_color='tan', add_ocean=False, ocean_color='lightblue', grid_lines=True, lat_grid=[-80.0, -60.0, -30.0, 0.0, 30.0, 60.0, 80.0], lon_grid=[-180.0, -150.0, -120.0, -90.0, -60.0, -30.0, 0.0, 30.0, 60.0, 90.0, 120.0, 150.0, 180.0]) Function creates and returns an orthographic map projection using cartopy Example ------- >>> map_axis = make_orthographic_map(central_longitude=200,central_latitude=30) Optional Parameters ----------- central_longitude : central longitude of projection (default is 0) central_latitude : central latitude of projection (default is 0) figsize : size of the figure (default is 8x8) add_land : chose whether land is plotted on map (default is true) land_color : specify land color (default is 'tan') add_ocean : chose whether land is plotted on map (default is False, change to True to plot) ocean_color : specify ocean color (default is 'lightblue') grid_lines : chose whether gird lines are plotted on map (default is true) lat_grid : specify the latitude grid (default is 30 degree spacing) lon_grid : specify the longitude grid (default is 30 degree spacing)
data=pd.read_csv('data_files/polemap_magic/locations.txt',sep='\t',header=1)
lats=data['pole_lat'].values
lons=data['pole_lon'].values
if has_cartopy:
map_axis =ipmag.make_orthographic_map(central_latitude=90,figsize=(6,6),land_color='bisque')
ipmag.plot_vgp(map_axis, vgp_lon=lons, vgp_lat=lats,\
markersize=20, legend='no')
elif has_basemap:
m = Basemap(projection='ortho',lat_0=90,lon_0=0)
plt.figure(figsize=(6, 6))
m.drawcoastlines(linewidth=0.25)
m.fillcontinents(color='bisque',lake_color='white',zorder=1)
m.drawmapboundary(fill_color='white')
m.drawmeridians(np.arange(0,360,30))
m.drawparallels(np.arange(-90,90,30))
ipmag.plot_vgp_basemap(m, vgp_lon=lons, vgp_lat=lats, color='k', marker='o', \
markersize=20, legend='no')
Alternatively, you can use the function ipmag.polemap_magic.
help(ipmag.polemap_magic)
Help on function polemap_magic in module pmagpy.ipmag: polemap_magic(loc_file='locations.txt', dir_path='.', interactive=False, crd='', sym='ro', symsize=40, rsym='g^', rsymsize=40, fmt='pdf', res='c', proj='ortho', flip=False, anti=False, fancy=False, ell=False, ages=False, lat_0=90.0, lon_0=0.0, save_plots=True) Use a MagIC format locations table to plot poles. Parameters ---------- loc_file : str, default "locations.txt" dir_path : str, default "." directory name to find loc_file in (if not included in loc_file) interactive : bool, default False if True, interactively plot and display (this is best used on the command line only) crd : str, default "" coordinate system [g, t] (geographic, tilt_corrected) fmt : str, default "pdf" res : str, default "c" resolution [c, l, i, h] (crude, low, intermediate, high) proj : str, default "ortho" ortho = orthographic lcc = lambert conformal moll = molweide merc = mercator flip : bool, default False if True, flip reverse poles to normal antipode anti : bool, default False if True, plot antipodes for each pole fancy : bool, default False if True, plot topography (not yet implementedj) ell : bool, default False if True, plot ellipses ages : bool, default False if True, plot ages lat_0 : float, default 90. eyeball latitude lon_0 : float, default 0. eyeball longitude save_plots : bool, default True if True, create and save all requested plots
ipmag.polemap_magic('data_files/polemap_magic/locations.txt', save_plots=False)
(True, [])
[Essentials Chapter 16] [Essentials Appendix A.3.5] [command line version]
This program finds rotation poles for a specified location, age and destination plate, then rotates the point into the destination plate coordinates using the rotations and methods described in Essentials Appendix A.3.5.
This can be done for you using the function frp.get_pole() in the finite rotation pole module pmagpy.frp. You then call pmag.pt_rot() to do the rotation. Let's use this to rotate the Cretaceous poles from Europe (the same data as in the polemap_magic example) into South African coordinates.
# need to load this special module
import pmagpy.frp as frp
help(frp.get_pole)
Help on function get_pole in module pmagpy.frp: get_pole(continent, age) returns rotation poles and angles for specified continents and ages assumes fixed Africa. Parameters __________ continent : aus : Australia eur : Eurasia mad : Madagascar [nwaf,congo] : NW Africa [choose one] col : Colombia grn : Greenland nam : North America par : Paraguay eant : East Antarctica ind : India [neaf,kala] : NE Africa [choose one] [sac,sam] : South America [choose one] ib : Iberia saf : South Africa Returns _______ [pole longitude, pole latitude, rotation angle] : for the continent at specified age
Prot=frp.get_pole('eur',100)
Prot
[40.2, -12.5, 28.5]
help(pmag.pt_rot)
Help on function pt_rot in module pmagpy.pmag: pt_rot(EP, Lats, Lons) Rotates points on a globe by an Euler pole rotation using method of Cox and Hart 1986, box 7-3. Parameters ---------- EP : Euler pole list [lat,lon,angle] Lats : list of latitudes of points to be rotated Lons : list of longitudes of points to be rotated Returns _________ RLats : rotated latitudes RLons : rotated longitudes
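The Cox and Hart method amounts to a Rodrigues rotation about the Euler pole. Here is a compact sketch of that math for a single point (illustrative only; the sign and ordering conventions of pmag.pt_rot may differ):

```python
import numpy as np

def rotate_pt(pole_lat, pole_lon, omega, lat, lon):
    """Rotate one (lat, lon) point by omega degrees about an Euler pole."""
    def to_xyz(la, lo):
        la, lo = np.radians(la), np.radians(lo)
        return np.array([np.cos(la) * np.cos(lo),
                         np.cos(la) * np.sin(lo),
                         np.sin(la)])
    e = to_xyz(pole_lat, pole_lon)   # unit vector along the Euler pole
    p = to_xyz(lat, lon)             # unit vector of the point
    w = np.radians(omega)
    # Rodrigues rotation formula
    r = (p * np.cos(w) + np.cross(e, p) * np.sin(w)
         + e * np.dot(e, p) * (1 - np.cos(w)))
    return np.degrees(np.arcsin(r[2])), np.degrees(np.arctan2(r[1], r[0]))

# 90 degrees about the north pole carries (0N, 0E) to (0N, 90E)
print(rotate_pt(90, 0, 90, 0, 0))
```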
data=pd.read_csv('data_files/polemap_magic/locations.txt',sep='\t',header=1)
lats=data['pole_lat'].values
lons=data['pole_lon'].values
RLats,RLons=pmag.pt_rot(Prot,lats,lons)
And now we can plot them using pmagplotlib.plot_map()
Opts={}
Opts['sym']='wo' # sets the symbol
Opts['symsize']=10
Opts['proj']='ortho'
Opts['edge']='black'
Opts['lat_0']=90
Opts['details']={}
Opts['details']['fancy']=True # warning : this option takes a few minutes
if has_cartopy:
plt.figure(1,(6,6)) # optional - make a map
pmagplotlib.plot_map(1, RLats, RLons, Opts)
elif has_basemap:
plt.figure(1,(6,6)) # optional - make a map
pmagplotlib.plot_map_basemap(1, RLats, RLons, Opts)
gridlines only supported for PlateCarree, Lambert Conformal, and Mercator plots currently
pmagplotlib.plot_ts( ) makes a plot of the Geomagnetic Reversal Time Scale (your choice of several) between specified age points.
help(pmagplotlib.plot_ts)
Help on function plot_ts in module pmagpy.pmagplotlib: plot_ts(ax, agemin, agemax, timescale='gts12', ylabel='Age (Ma)') Make a time scale plot between specified ages. Parameters: ------------ ax : figure object agemin : Minimum age for timescale agemax : Maximum age for timescale timescale : Time Scale [ default is Gradstein et al., (2012)] for other options see pmag.get_ts() ylabel : if set, plot as ylabel
fig=plt.figure(1,(3,12))
ax=fig.add_subplot(111)
agemin,agemax=0,10
pmagplotlib.plot_ts(ax,agemin,agemax)
[Essentials Appendix B.1.5] [command line version]
qqplot makes a quantile-quantile plot of the input data file against a normal distribution. The plot has the mean, standard deviation and the $D$ statistic as well as the $D_c$ statistic expected from a normal distribution. We can read in a data file and then call pmagplotlib.plot_qq_norm(). Let's reprise the gaussian example from before and test if the data are in fact likely to be normally distributed.
data=list(pmag.gaussdev(10,3,100))
help(pmagplotlib.plot_qq_norm)
Help on function plot_qq_norm in module pmagpy.pmagplotlib: plot_qq_norm(fignum, Y, title) makes a Quantile-Quantile plot for data Parameters _________ fignum : matplotlib figure number Y : list or array of data title : title string for plot Returns ___________ d,dc : the values for D and Dc (the critical value) if d>dc, likely to be normally distributed (95\% confidence)
D,Dc=pmagplotlib.plot_qq_norm(1,data,'')
print (D,Dc)
0.04840271472940638 0.0886
Whew! It worked this time; it will fail about 5% of the time even for truly normal data.
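For intuition, the $D$ statistic is just the largest gap between the empirical CDF and the CDF of the best-fit normal, and the critical value above is the Lilliefors-style $D_c \approx 0.886/\sqrt{N}$ (0.0886 for $N=100$). A sketch of the calculation (not the pmagplotlib implementation):

```python
import math
import numpy as np

def qq_norm_D(data):
    """Kolmogorov-style D against a fitted normal, plus a ~95% critical Dc."""
    x = np.sort(np.asarray(data, dtype=float))
    n = len(x)
    z = (x - x.mean()) / x.std(ddof=1)           # standardize with sample fit
    cdf = np.array([0.5 * (1 + math.erf(v / math.sqrt(2))) for v in z])
    emp_hi = np.arange(1, n + 1) / n             # ECDF just after each point
    emp_lo = np.arange(0, n) / n                 # ECDF just before each point
    D = max(np.abs(cdf - emp_hi).max(), np.abs(cdf - emp_lo).max())
    Dc = 0.886 / math.sqrt(n)                    # Lilliefors-style critical value
    return D, Dc

D, Dc = qq_norm_D(np.random.normal(10, 3, 100))
print(D, Dc)  # D < Dc about 95% of the time for truly normal data
```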
This program is very much like qqplot and fishqq, which plot data against normal and Fisherian distributions respectively. In fact fishqq plots the declination values against a uniform distribution just like qqunf.
qqunf.py (the command line version) calls pmagplotlib.plot_qq_unf(). To demonstrate the functionality of qqunf, we can generate a simulated data set with random.uniform(), inspect it with a histogram, and then test whether it is likely to actually have been drawn from a uniform distribution (95% confidence) using pmagplotlib.plot_qq_unf().
import numpy.random as random
uniform=random.uniform(0,100,size=100)
plt.hist(uniform,histtype='step',color='blue',density=True,facecolor='white')
(array([0.01324589, 0.01018915, 0.00917023, 0.01324589, 0.01120806, 0.00611349, 0.0071324 , 0.01222697, 0.01120806, 0.00815132]), array([ 1.29320529, 11.10757136, 20.92193744, 30.73630351, 40.55066958, 50.36503565, 60.17940172, 69.99376779, 79.80813387, 89.62249994, 99.43686601]), <a list of 1 Patch objects>)
Mu,Mu_0=pmagplotlib.plot_qq_unf(1,uniform,"",degrees=False)
ipmag.quick_hyst("data_files/3_0/McMurdo")
mc04c-1 1 out of 8 working on t: 273 1 saved in McMurdo_mc04c_mc04c_mc04c-1_hyst.png mc113a1-1 2 out of 8 working on t: 273 1 saved in McMurdo_mc113a_mc113a_mc113a1-1_hyst.png mc117a2-1 3 out of 8 working on t: 273 1 saved in McMurdo_mc117a_mc117a_mc117a2-1_hyst.png mc120a2-1 4 out of 8 working on t: 273 1 saved in McMurdo_mc120a_mc120a_mc120a2-1_hyst.png mc129a1-1 5 out of 8 working on t: 273 1 saved in McMurdo_mc129a_mc129a_mc129a1-1_hyst.png mc164a2-1 6 out of 8 working on t: 273 1 saved in McMurdo_mc164a_mc164a_mc164a2-1_hyst.png mc205a1-1 7 out of 8 working on t: 273 1 saved in McMurdo_mc205a_mc205a_mc205a1-1_hyst.png mc217a2-1 8 out of 8 working on t: 273 1 saved in McMurdo_mc217a_mc217a_mc217a2-1_hyst.png
(True, ['McMurdo_mc04c_mc04c_mc04c-1_hyst.png', 'McMurdo_mc113a_mc113a_mc113a1-1_hyst.png', 'McMurdo_mc117a_mc117a_mc117a2-1_hyst.png', 'McMurdo_mc120a_mc120a_mc120a2-1_hyst.png', 'McMurdo_mc129a_mc129a_mc129a1-1_hyst.png', 'McMurdo_mc164a_mc164a_mc164a2-1_hyst.png', 'McMurdo_mc205a_mc205a_mc205a1-1_hyst.png', 'McMurdo_mc217a_mc217a_mc217a2-1_hyst.png'])
[Essentials Chapter 12] [command line version]
revtest uses the bootstrap reversals test described in detail in [Chapter 12] of the online textbook "Essentials of Paleomagnetism". It splits the data into two polarity groups, flips the "reverse" mode to its antipode, and does the test for a common mean on the two groups. It has been implemented for notebooks as ipmag.reversal_test_bootstrap().
help(ipmag.reversal_test_bootstrap)
Help on function reversal_test_bootstrap in module pmagpy.ipmag: reversal_test_bootstrap(dec=None, inc=None, di_block=None, plot_stereo=False, save=False, save_folder='.', fmt='svg') Conduct a reversal test using bootstrap statistics (Tauxe, 2010) to determine whether two populations of directions could be from an antipodal common mean. Parameters ---------- dec: list of declinations inc: list of inclinations or di_block: a nested list of [dec,inc] A di_block can be provided in which case it will be used instead of dec, inc lists. plot_stereo : before plotting the CDFs, plot stereonet with the bidirectionally separated data (default is False) save : boolean argument to save plots (default is False) save_folder : directory where plots will be saved (default is current directory, '.') fmt : format of saved figures (default is 'svg') Returns ------- plots : Plots of the cumulative distribution of Cartesian components are shown as is an equal area plot if plot_stereo = True Examples -------- Populations of roughly antipodal directions are developed here using ``ipmag.fishrot``. These directions are combined into a single di_block given that the function determines the principal component and splits the data accordingly by polarity. >>> directions_n = ipmag.fishrot(k=20, n=30, dec=5, inc=-60) >>> directions_r = ipmag.fishrot(k=35, n=25, dec=182, inc=57) >>> directions = directions_n + directions_r >>> ipmag.reversal_test_bootstrap(di_block=directions, plot_stereo = True) Data can also be input to the function as separate lists of dec and inc. In this example, the di_block from above is split into lists of dec and inc which are then used in the function: >>> direction_dec, direction_inc, direction_moment = ipmag.unpack_di_block(directions) >>> ipmag.reversal_test_bootstrap(dec=direction_dec,inc=direction_inc, plot_stereo = True)
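The bootstrap logic can be sketched directly with NumPy: flip the reverse group to its antipode, bootstrap the Cartesian mean of each group, and check whether the 95% bounds of each component overlap (a conceptual sketch, not the ipmag implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

def di2xyz(dec, inc):
    """Direction arrays (degrees) to rows of unit x, y, z vectors."""
    d, i = np.radians(dec), np.radians(inc)
    return np.column_stack([np.cos(i) * np.cos(d),
                            np.cos(i) * np.sin(d),
                            np.sin(i)])

def boot_means(xyz, nb=1000):
    """Bootstrap pseudo-samples of the mean x, y, z components."""
    idx = rng.integers(0, len(xyz), (nb, len(xyz)))
    return xyz[idx].mean(axis=1)

# synthetic normal-polarity group and a reverse group flipped to its antipode
normal = di2xyz(5 + rng.normal(0, 3, 50), -60 + rng.normal(0, 3, 50))
reverse = -di2xyz(185 + rng.normal(0, 3, 50), 60 + rng.normal(0, 3, 50))
bn, br = boot_means(normal), boot_means(reverse)
# the reversals test is passed when the 95% bounds of each component overlap
overlap = [min(np.percentile(bn[:, k], 97.5), np.percentile(br[:, k], 97.5)) >=
           max(np.percentile(bn[:, k], 2.5), np.percentile(br[:, k], 2.5))
           for k in range(3)]
print(overlap)
```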
di_block=np.loadtxt('data_files/revtest/revtest_example.txt')
ipmag.reversal_test_bootstrap(di_block=di_block,plot_stereo=True)
[Essentials Chapter 12] [MagIC Database] [command line version]
This is the same idea as revtest but reads in MagIC formatted data files. We will do this the Pandas way.
data=pd.read_csv('data_files/revtest_magic/sites.txt',sep='\t',header=1)
decs=data.dir_dec.values
incs=data.dir_inc.values
ipmag.reversal_test_bootstrap(dec=decs,inc=incs,plot_stereo=True)
[Essentials Chapter 13] [command line version]
This program converts the six tensor elements to eigenparameters - the inverse of eigs_s.
We can call the function pmag.doseigs() from the notebook.
help(pmag.doseigs)
Help on function doseigs in module pmagpy.pmag: doseigs(s) convert s format for eigenvalues and eigenvectors Parameters __________ s=[x11,x22,x33,x12,x23,x13] : the six tensor elements Return __________ tau : [t1,t2,t3] tau is an list of eigenvalues in decreasing order: V : [[V1_dec,V1_inc],[V2_dec,V2_inc],[V3_dec,V3_inc]] is an list of the eigenvector directions
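The conversion is an eigen-decomposition of the symmetric 3×3 tensor assembled from the six elements. A sketch of the idea (ordering and eigenvector-direction conventions may differ from pmag.doseigs):

```python
import numpy as np

def seigs_sketch(s):
    """Six elements [x11,x22,x33,x12,x23,x13] -> eigenvalues, eigenvectors."""
    x11, x22, x33, x12, x23, x13 = s
    A = np.array([[x11, x12, x13],
                  [x12, x22, x23],
                  [x13, x23, x33]])
    tau, V = np.linalg.eigh(A)        # eigh returns ascending eigenvalues
    return tau[::-1], V[:, ::-1]      # descending order, columns = eigenvectors

tau, V = seigs_sketch([0.4, 0.35, 0.25, 0.0, 0.0, 0.0])
print(tau)
```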
Ss=np.loadtxt('data_files/s_eigs/s_eigs_example.dat')
for s in Ss:
tau,V=pmag.doseigs(s)
print ('%f %8.2f %8.2f %f %8.2f %8.2f %f %8.2f %8.2f'%\
(tau[2],V[2][0],V[2][1],tau[1],V[1][0],V[1][1],tau[0],V[0][0],V[0][1]))
0.331272 239.53 44.70 0.333513 126.62 21.47 0.335215 19.03 37.54 0.331779 281.12 6.18 0.332183 169.79 73.43 0.336039 12.82 15.32 0.330470 283.57 27.30 0.333283 118.37 61.91 0.336247 16.75 6.13 0.331238 261.36 12.07 0.333776 141.40 66.82 0.334986 355.70 19.48 0.330857 255.71 7.13 0.333792 130.85 77.65 0.335352 346.97 10.03 0.331759 268.51 26.79 0.334050 169.66 16.95 0.334190 51.04 57.53 0.331950 261.59 20.68 0.333133 92.18 68.99 0.334917 352.93 3.54 0.331576 281.42 21.32 0.333121 117.04 67.94 0.335303 13.54 5.41
[Essentials Chapter 13] [command line version]
s_geo takes the six tensor elements in specimen coordinates and applies a rotation into geographic coordinates, similar to di_geo. To do this we will call pmag.dosgeo() from within the notebook.
help(pmag.dosgeo)
Help on function dosgeo in module pmagpy.pmag: dosgeo(s, az, pl) rotates matrix a to az,pl returns s Parameters __________ s : [x11,x22,x33,x12,x23,x13] - the six tensor elements az : the azimuth of the specimen X direction pl : the plunge (inclination) of the specimen X direction Return s_rot : [x11,x22,x33,x12,x23,x13] - after rotation
Ss=np.loadtxt('data_files/s_geo/s_geo_example.dat')
for s in Ss:
print(pmag.dosgeo(s[0:6],s[6],s[7]))
[ 3.3412680e-01  3.3282733e-01  3.3304587e-01 -1.5288725e-04  1.2484333e-03  1.3572115e-03]
[ 3.3556300e-01  3.3198264e-01  3.3245432e-01  8.7258930e-04  2.4140846e-04  9.6166186e-04]
[ 3.3584908e-01  3.3140627e-01  3.3274469e-01  1.3184461e-03  1.1881561e-03  2.9863901e-05]
[ 0.33479756  0.3314253   0.3337772  -0.00047493  0.00049539  0.00044303]
[ 3.3505613e-01  3.3114848e-01  3.3379540e-01 -1.0137478e-03  2.8535718e-04  3.4851654e-04]
[ 3.3406156e-01  3.3226916e-01  3.3366925e-01 -2.2665596e-05  9.8547747e-04  5.5531069e-05]
[ 3.3486596e-01  3.3216032e-01  3.3297369e-01 -3.5492037e-04  3.9253550e-04  1.5402706e-04]
[ 3.3510646e-01  3.3196402e-01  3.3292958e-01  7.5965287e-04  5.7242444e-04  1.0112141e-04]
[Essentials Chapter 13] [command line version]
s_hext calculates Hext (1963, doi: 10.2307/2333905) statistics for anisotropy data in the six tensor element format.
It calls pmag.dohext().
help(pmag.dohext)
Help on function dohext in module pmagpy.pmag: dohext(nf, sigma, s) calculates hext parameters for nf, sigma and s Parameters __________ nf : number of degrees of freedom (measurements - 6) sigma : the sigma of the measurements s : [x11,x22,x33,x12,x23,x13] - the six tensor elements Return hpars : dictionary of Hext statistics with keys: 'F_crit' : critical value for anisotropy 'F12_crit' : critical value for tau1>tau2, tau2>3 'F' : value of F 'F12' : value of F12 'F23' : value of F23 'v1_dec': declination of principal eigenvector 'v1_inc': inclination of principal eigenvector 'v2_dec': declination of major eigenvector 'v2_inc': inclination of major eigenvector 'v3_dec': declination of minor eigenvector 'v3_inc': inclination of minor eigenvector 't1': principal eigenvalue 't2': major eigenvalue 't3': minor eigenvalue 'e12': angle of confidence ellipse of principal eigenvector in direction of major eigenvector 'e23': angle of confidence ellipse of major eigenvector in direction of minor eigenvector 'e13': angle of confidence ellipse of principal eigenvector in direction of minor eigenvector If working with data set with no sigmas and the average is desired, use nf,sigma,avs=pmag.sbar(Ss) as input
We are working with data that have no sigmas attached to them and want to average all the values in the file together. Let's look at the rotated data from the s_geo example.
# read in the data
Ss=np.loadtxt('data_files/s_geo/s_geo_example.dat')
# make a container for the rotated S values
SGeos=[]
for s in Ss:
SGeos.append(pmag.dosgeo(s[0:6],s[6],s[7]))
nf,sigma,avs=pmag.sbar(SGeos) # get the average over all the data
hpars=pmag.dohext(nf,sigma,avs)
print(hpars)
{'F_crit': '2.4377', 'F12_crit': '3.2199', 'F': 5.752167064666719, 'F12': 3.5510601243464004, 'F23': 3.663557566868797, 'v1_dec': 5.330894345303252, 'v1_inc': 14.682483596068828, 'v2_dec': 124.47233106679136, 'v2_inc': 61.71700837018042, 'v3_dec': 268.75792759495505, 'v3_inc': 23.599173682479822, 't1': 0.3350527, 't2': 0.33334228, 't3': 0.331605, 'e12': 25.45983619637674, 'e23': 25.114754046379378, 'e13': 13.28977437428862}
[Essentials Chapter 13] [command line version]
s_tilt takes the 6 tensor elements in geographic coordinates and applies the rotation similar to di_tilt into stratigraphic coordinates. It calls pmag.dostilt(). But be careful! s_tilt.py (the command line program) assumes that the bedding info is the strike, with the dip to the right of strike unlike pmag.dostilt which assumes that the azimuth is the dip direction.
help(pmag.dostilt)
Help on function dostilt in module pmagpy.pmag: dostilt(s, bed_az, bed_dip) Rotates "s" tensor to stratigraphic coordinates Parameters __________ s : [x11,x22,x33,x12,x23,x13] - the six tensor elements bed_az : bedding dip direction bed_dip : bedding dip Return s_rot : [x11,x22,x33,x12,x23,x13] - after rotation
# note that the data in this example are Ss and strike and dip (not bed_az,bed_dip)
Ss=np.loadtxt('data_files/s_tilt/s_tilt_example.dat')
for s in Ss:
print(pmag.dostilt(s[0:6],s[6]+90.,s[7])) # make the bedding azimuth dip direction, not strike.
[ 0.3345571   0.33192658  0.3335163  -0.00043562  0.00092779  0.00105006]
[ 3.3585501e-01  3.3191565e-01  3.3222935e-01  5.5959972e-04 -5.3161417e-05  6.4731773e-04]
[ 3.3586669e-01  3.3084923e-01  3.3328408e-01  1.4226610e-03  1.3233915e-04  9.2028757e-05]
[ 3.3488664e-01  3.3138493e-01  3.3372843e-01 -5.6597008e-04 -3.9085373e-04  4.8729391e-05]
[ 3.3506602e-01  3.3127019e-01  3.3366373e-01 -1.0519302e-03 -5.7256600e-04 -2.9959495e-04]
[ 3.3407688e-01  3.3177567e-01  3.3414748e-01  7.0073889e-05  1.8446925e-04  5.0731825e-05]
[ 3.3483925e-01  3.3197853e-01  3.3318222e-01 -2.8446535e-04  3.5184901e-05 -2.9261652e-04]
[ 3.3513144e-01  3.3175036e-01  3.3311823e-01  7.7914412e-04 -6.4021988e-05  4.6115947e-05]
[Essentials Chapter 14] [command line version]
This program reads in data files with vgp_lon, vgp_lat and optional kappa, N, and site latitude. It allows some filtering based on the requirements of the study (e.g., a VGP colatitude cutoff, the Vandamme (1994) variable cutoff, a Fisher kappa cutoff, or flipping reverse poles), with an optional correction for within-site scatter.
The filtering is just what Pandas was designed for, so we can call pmag.scalc_vgp_df(), which works on a suitably constructed Pandas DataFrame.
help(pmag.scalc_vgp_df)
Help on function scalc_vgp_df in module pmagpy.pmag: scalc_vgp_df(vgp_df, anti=0, rev=0, cutoff=180.0, kappa=0, n=0, spin=0, v=0, boot=0, mm97=0, nb=1000) Calculates Sf for a dataframe with VGP Lat., and optional Fisher's k, site latitude and N information can be used to correct for within site scatter (McElhinny & McFadden, 1997) Parameters _________ df : Pandas Dataframe with columns REQUIRED: vgp_lat : VGP latitude ONLY REQUIRED for MM97 correction: dir_k : Fisher kappa estimate dir_n_samples : number of samples per site lat : latitude of the site mm97 : if True, will do the correction for within site scatter OPTIONAL: boot : if True. do bootstrap nb : number of bootstraps, default is 1000 Returns _____________ N : number of VGPs used in calculation S : S low : 95% confidence lower bound [0 if boot=0] high 95% confidence upper bound [0 if boot=0] cutoff : cutoff used in calculation of S
To just calculate the value of S (without the within site scatter) we read in a data file and attach the correct headers to it depending on what is in it.
vgp_df=pd.read_csv('data_files/scalc/scalc_example.txt',delim_whitespace=True,header=None)
if len(list(vgp_df.columns))==2:
vgp_df.columns=['vgp_lon','vgp_lat']
vgp_df['dir_k'],vgp_df['dir_n'],vgp_df['lat']=0,0,0
else:
vgp_df.columns=['vgp_lon','vgp_lat','dir_k','dir_n_samples','lat']
N,S_B,low,high,cutoff=pmag.scalc_vgp_df(vgp_df)
print(N, '%7.1f %7.1f ' % (S_B, cutoff))
100 21.8 180.0
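For reference, S is the root-mean-square angular deviation of the VGPs: $S^2 = \frac{1}{N-1}\sum \Delta_i^2$. In the spin-axis case $\Delta_i$ is just the VGP colatitude, so a minimal sketch of the calculation (using hypothetical latitudes in place of the example file) is:

```python
import numpy as np

rng = np.random.default_rng(0)
vgp_lat = 90.0 - np.abs(rng.normal(0, 15, 100))  # hypothetical VGP latitudes
delta = 90.0 - vgp_lat                           # angular distance from the spin axis
S = np.sqrt(np.sum(delta**2) / (delta.size - 1))
print('%7.1f' % S)
```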
To apply a cutoff for the Fisher k value, we pass kappa to pmag.scalc_vgp_df(), which filters out sites with kappa below the cutoff prior to calculating S_B. Let's filter for kappa>50
N,S_B,low,high,cutoff=pmag.scalc_vgp_df(vgp_df,kappa=50)
print(N, '%7.1f %7.1f ' % (S_B, cutoff))
73 18.5 180.0
To apply the Vandamme (1994) approach, we set v to True
N,S_B,low,high,cutoff=pmag.scalc_vgp_df(vgp_df,v=True)
print(N, '%7.1f %7.1f ' % (S_B, cutoff))
89 15.2 32.3
To flip the "reverse" directions, we set anti to 1
N,S_B,low,high,cutoff=pmag.scalc_vgp_df(vgp_df,anti=True)
print(N, '%7.1f %7.1f ' % (S_B, cutoff))
flipping reverse 100 21.1 180.0
And, to do relative to the spin axis, set spin to True:
N,S_B,low,high,cutoff=pmag.scalc_vgp_df(vgp_df,spin=True)
print(N, '%7.1f %7.1f ' % (S_B, cutoff))
100 21.6 180.0
[Essentials Chapter 14] [command line version]
This program does the same thing as scalc, but reads in a MagIC formatted file. So, we can do that easy-peasy.
vgp_df=pd.read_csv('data_files/scalc_magic/sites.txt',sep='\t',header=1)
N,S_B,low,high,cutoff=pmag.scalc_vgp_df(vgp_df,anti=True)
print(N, '%7.1f %7.1f ' % (S_B, cutoff))
flipping reverse 21 17.3 180.0
vgp_df=pd.read_csv('data_files/scalc_magic/sites.txt',sep='\t',header=1)
N,S_B,low,high,cutoff=pmag.scalc_vgp_df(vgp_df,anti=True,spin=True)
print(N, '%7.1f %7.1f ' % (S_B, cutoff))
flipping reverse 21 16.8 180.0
Like pmag.flip(), pmag.separate_directions() divides a directional data set into two modes. Unlike pmag.flip(), it returns the two separate modes (e.g., normal and reverse).
help(pmag.separate_directions)
Help on function separate_directions in module pmagpy.pmag: separate_directions(di_block) Separates set of directions into two modes based on principal direction Parameters _______________ di_block : block of nested dec,inc pairs Return mode_1_block,mode_2_block : two lists of nested dec,inc pairs
#read in the data into an array
vectors=np.loadtxt('data_files/eqarea_ell/tk03.out').transpose()
di_block=vectors[0:2].transpose() # decs are di_block[0], incs are di_block[1]
# separate the directions into normal and reverse modes
normal,reverse=pmag.separate_directions(di_block)
# and plot them up
ipmag.plot_net(1)
ipmag.plot_di(di_block=normal,color='red')
ipmag.plot_di(di_block=reverse,color='b')
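The separation idea can be sketched without PmagPy: convert to Cartesian coordinates and split by the sign of the dot product with a reference direction. (PmagPy actually separates on the principal eigenvector; a fixed reference direction is a simplification for illustration.)

```python
import numpy as np

def split_modes(di_block, ref_dec=0.0, ref_inc=90.0):
    """Split [dec,inc] pairs into two modes by dot product with a reference direction."""
    di = np.asarray(di_block, dtype=float)
    d, i = np.radians(di[:, 0]), np.radians(di[:, 1])
    xyz = np.column_stack([np.cos(i) * np.cos(d), np.cos(i) * np.sin(d), np.sin(i)])
    rd, ri = np.radians(ref_dec), np.radians(ref_inc)
    ref = np.array([np.cos(ri) * np.cos(rd), np.cos(ri) * np.sin(rd), np.sin(ri)])
    mask = xyz @ ref >= 0
    return di[mask], di[~mask]

normal, reverse = split_modes([[10, 50], [350, 45], [185, -55]])
print(len(normal), len(reverse))  # 2 1
```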
[Essentials Chapter 7] [command line version]
This program reads in dec/inc data and "squishes" the inclinations using the formula from King (1955, doi: 10.1111/j.1365-246X.1955.tb06558.x) $\tan(I_o)=flat \tan(I_f)$. [See also unsquish]. We can call pmag.squish() from within the notebook.
help(pmag.squish)
Help on function squish in module pmagpy.pmag: squish(incs, f) returns 'flattened' inclination, assuming factor, f and King (1955) formula: tan (I_o) = f tan (I_f) Parameters __________ incs : array of inclination (I_f) data to flatten f : flattening factor Returns _______ I_o : inclinations after flattening
di_block=np.loadtxt('data_files/squish/squish_example.dat').transpose()
decs=di_block[0]
incs=di_block[1]
flat=.4
fincs=pmag.squish(incs,flat)
ipmag.plot_net(1)
ipmag.plot_di(dec=decs,inc=incs,title='Original',color='blue')
ipmag.plot_net(2)
ipmag.plot_di(dec=decs,inc=fincs,title='Squished',color='red')
[Essentials Chapter 11] [command line version]
This program just calculates the N, mean, sum, sigma and sigma % for data. Obviously, there are numerous ways to do that in NumPy, so let's just use those.
data=np.loadtxt('data_files/gaussian/gauss.out')
print (data.shape[0],data.mean(),data.sum(),data.std())
100 9.949869990000002 994.9869990000001 0.9533644867617789
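The cell above skips sigma %, which is just the standard deviation expressed as a percentage of the mean; a sketch with synthetic data standing in for gauss.out:

```python
import numpy as np

rng = np.random.default_rng(42)
data = rng.normal(10, 1, 100)  # stand-in for data_files/gaussian/gauss.out
sigma_pct = 100 * data.std() / data.mean()
print('%5.2f %%' % sigma_pct)
```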
[Essentials Chapter 15] [MagIC Database] [command line version]
This program is a dinosaur; the same thing can be done much more easily with the wonders of Pandas and matplotlib, as demonstrated here.
# read in the data
data=pd.read_csv('data_files/strip_magic/sites.txt',sep='\t',header=1)
# see what's there
data.columns
# you might have to use **df.dropna()** to clean off unwanted NaN lines or other data massaging
# but not for this example
Index(['site', 'location', 'age', 'age_unit', 'dir_dec', 'dir_inc', 'core_depth', 'lat', 'lon', 'geologic_classes', 'geologic_types', 'lithologies', 'citations', 'vgp_lat', 'vgp_lon', 'paleolatitude', 'vgp_lat_rev', 'vgp_lon_rev'], dtype='object')
plt.figure(1,(10,4)) # make the figure
plt.plot(data.age,data.vgp_lat,'b-') # plot as blue line
plt.plot(data.age,data.vgp_lat,'ro',markeredgecolor="black") # plot as red dots with black rims
plt.xlabel('Age (Ma)') # label the time axis
plt.ylabel('VGP Lat.$^{\circ}$')
plt.ylim(-90,90) # set the plot limits
plt.axhline(color='black'); # put on a zero line
[Essentials Chapter 9] [command line version]
Paleomagnetists often use the sun to orient their cores, especially if the sampling site is strongly magnetic and would deflect the magnetic compass. The information required is: where are you (e.g., latitude and longitude), what day is it, what time is it in Greenwich Mean Time (a.k.a. Universal Time) and where is the sun (e.g., the antipode of the angle the shadow of a gnomon makes with the desired direction)?
This calculation is surprisingly accurate and was implemented in the function pmag.dosundec().
help(pmag.dosundec)
Help on function dosundec in module pmagpy.pmag: dosundec(sundata) returns the declination for a given set of suncompass data Parameters __________ sundata : dictionary with these keys: date: time string with the format 'yyyy:mm:dd:hr:min' delta_u: time to SUBTRACT from local time for Universal time lat: latitude of location (negative for south) lon: longitude of location (negative for west) shadow_angle: shadow angle of the desired direction with respect to the sun. Returns ________ sunaz : the declination of the desired direction wrt true north.
Say you (or your elderly colleague) were located at 35$^{\circ}$ N and 33$^{\circ}$ E. The local time was three hours ahead of Universal Time. The shadow angle for the drilling direction was 68$^{\circ}$ measured at 16:09 on May 23, 1994. pmag.dosundec() requires a dictionary with the necessary information:
sundata={'delta_u':3,'lat':35,'lon':33,\
'date':'1994:05:23:16:9','shadow_angle':68}
print ('%7.1f'%(pmag.dosundec(sundata)))
154.2
[Essentials Chapter 16] [command line version]
Sometimes it is useful to generate a distribution of synthetic geomagnetic field vectors that you might expect to find from paleosecular variation of the geomagnetic field. The program tk03 generates distributions of field vectors from the PSV model of Tauxe and Kent (2004, doi: 10.1029/145GM08). This program was implemented for notebook use as ipmag.tk03(). [See also find_ei].
help(ipmag.tk03)
Help on function tk03 in module pmagpy.ipmag: tk03(n=100, dec=0, lat=0, rev='no', G2=0, G3=0) Generates vectors drawn from the TK03.gad model of secular variation (Tauxe and Kent, 2004) at given latitude and rotated about a vertical axis by the given declination. Return a nested list of of [dec,inc,intensity]. Parameters ---------- n : number of vectors to determine (default is 100) dec : mean declination of data set (default is 0) lat : latitude at which secular variation is simulated (default is 0) rev : if reversals are to be included this should be 'yes' (default is 'no') G2 : specify average g_2^0 fraction (default is 0) G3 : specify average g_3^0 fraction (default is 0) Returns ---------- tk_03_output : a nested list of declination, inclination, and intensity (in nT) Examples -------- >>> ipmag.tk03(n=5, dec=0, lat=0) [[14.752502674158681, -36.189370642603834, 16584.848620957589], [9.2859465437113311, -10.064247301056071, 17383.950391596223], [2.4278460589582913, 4.8079990844938019, 18243.679003572055], [352.93759572283585, 0.086693343935840397, 18524.551174838372], [352.48366219759953, 11.579098286352332, 24928.412830772766]]
di_block=ipmag.tk03(lat=30)
ipmag.plot_net(1)
ipmag.plot_di(di_block=di_block,color='red',edge='black')
[Essentials Chapter 10] [MagIC Database] [command line version]
thellier_magic makes plots for Thellier-Thellier (Thellier E and Thellier O, 1959, Annales de Geophysique 15: 285–378) type experimental data.
It reads in MagIC formatted data, sorts the data into datablocks for plotting as Arai (Nagata et al., 1963, doi: 10.1029/JZ068i018p05277) or Zijderveld (Zijderveld, J. D. A. (1967). A.C. demagnetization of rocks: analysis of results. In D. Collinson, K. Creer, & S. Runcorn (Eds.), Methods in Paleomagnetism (pp. 254–286). Amsterdam: Elsevier) as well as equal area projections and de (re) magnetization plots.
For full functionality, you should use the Thellier GUI program (in pmag_gui.py from the command line), but within a notebook you can take a quick look using ipmag.thellier_magic().
Here we will look at some data from Shaar et al. (2011, doi: 10.1016/j.epsl.2010.11.013).
# plot the first five specimens
ipmag.thellier_magic(input_dir_path='data_files/thellier_magic/',
n_specs=5, save_plots=False, fmt="png") # s2s0-05
-W- Couldn't read in specimens data
-I- Make sure you've provided the correct file name
-W- Couldn't read in specimens data
-I- Make sure you've provided the correct file name
-W- Couldn't read in specimens data for data propagation
s2s0-01
s2s0-02
s2s0-03
s2s0-04
s2s0-05
(True, [])
It is at times handy to be able to generate a uniformly distributed set of directions (or geographic locations). This is done using a technique described by Fisher et al. (Fisher, N. I., Lewis, T., & Embleton, B. J. J. (1987). Statistical Analysis of Spherical Data. Cambridge: Cambridge University Press). uniform does that by calling pmag.get_unf().
help(pmag.get_unf)
Help on function get_unf in module pmagpy.pmag: get_unf(N=100) Generates N uniformly distributed directions using the way described in Fisher et al. (1987). Parameters __________ N : number of directions, default is 100 Returns ______ array of nested dec,inc pairs
di_block=pmag.get_unf()
ipmag.plot_net(1)
ipmag.plot_di(di_block=di_block,color='red',edge='black')
[Essentials Chapter 7] [Essentials Chapter 16] [command line version]
This program is just the inverse of squish in that it takes "squished" data and "unsquishes" them, assuming a King (1955, doi: 10.1111/j.1365-246X.1955.tb06558.x) relationship: $\tan(I_o)=flat \tan(I_f)$. So, $\tan(I_f) = \tan(I_o)/flat$.
It calls pmag.unsquish().
help(pmag.unsquish)
Help on function unsquish in module pmagpy.pmag: unsquish(incs, f) returns 'unflattened' inclination, assuming factor, f and King (1955) formula: tan (I_o) = tan (I_f)/f Parameters __________ incs : array of inclination (I_f) data to unflatten f : flattening factor Returns _______ I_o : inclinations after unflattening
di_block=np.loadtxt('data_files/unsquish/unsquish_example.dat').transpose()
decs=di_block[0]
incs=di_block[1]
flat=.4
fincs=pmag.unsquish(incs,flat)
ipmag.plot_net(1)
ipmag.plot_di(dec=decs,inc=incs,title='Squished',color='red')
ipmag.plot_net(2)
ipmag.plot_di(dec=decs,inc=fincs,title='Unsquished',color='blue')
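Because unsquish inverts squish exactly under the King relation, a plain-NumPy roundtrip (independent of PmagPy, assuming only the formulas above) recovers the original inclinations:

```python
import numpy as np

f = 0.4
incs = np.array([10., 30., 60.])
flattened = np.degrees(np.arctan(f * np.tan(np.radians(incs))))       # squish
recovered = np.degrees(np.arctan(np.tan(np.radians(flattened)) / f))  # unsquish
assert np.allclose(recovered, incs)
```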
[Essentials Chapter 2] [command line version]
vdm_b is the inverse of b_vdm in that it converts a Virtual [Axial] Dipole Moment (vdm or vadm) to a predicted geomagnetic field intensity observed at the earth's surface at a particular (paleo)latitude. This program calls pmag.vdm_b().
help(pmag.vdm_b)
Help on function vdm_b in module pmagpy.pmag: vdm_b(vdm, lat) Converts a virtual dipole moment (VDM) or a virtual axial dipole moment (VADM; input in units of Am^2) to a local magnetic field value (output in units of tesla) Parameters ---------- vdm : V(A)DM in units of Am^2 lat: latitude of site in degrees Returns ------- B: local magnetic field strength in tesla
print ('%7.1f microtesla'%(pmag.vdm_b(7.159e22,22)*1e6))
33.0 microtesla
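The conversion is just the dipole field formula $B = \frac{\mu_0}{4\pi}\frac{VDM}{r^3}\sqrt{1+3\sin^2\lambda}$; a self-contained check (assuming an Earth radius of 6.371x10^6 m) reproduces the value above:

```python
import math

mu0_over_4pi = 1e-7   # T m / A
r_e = 6.371e6         # Earth radius in m (assumed value)
vdm, lat = 7.159e22, 22.0
B = mu0_over_4pi * vdm / r_e**3 * math.sqrt(1 + 3 * math.sin(math.radians(lat))**2)
print('%7.1f microtesla' % (B * 1e6))  # 33.0 microtesla
```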
[Essentials Chapter 2] [command line version]
vector_mean calculates the vector mean for a set of vectors in polar coordinates (e.g., declination, inclination, intensity). This is similar to the Fisher mean (gofish) but uses vector length instead of unit vectors. It calls pmag.vector_mean().
help(pmag.vector_mean)
Help on function vector_mean in module pmagpy.pmag: vector_mean(data) calculates the vector mean of a given set of vectors Parameters __________ data : nested array of [dec,inc,intensity] Returns _______ dir : array of [dec, inc, 1] R : resultant vector length
data=np.loadtxt('data_files/vector_mean/vector_mean_example.dat')
Dir,R=pmag.vector_mean(data)
print (('%i %7.1f %7.1f %f')%(data.shape[0],Dir[0],Dir[1],R))
100 1.3 49.6 2289431.981383
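The vector mean is simply the resultant of the Cartesian components; a plain-NumPy sketch of the calculation (not PmagPy's code):

```python
import numpy as np

def vector_mean_np(decs, incs, ints):
    """Resultant direction and length of vectors given as dec, inc, intensity."""
    d, i = np.radians(decs), np.radians(incs)
    x = ints * np.cos(i) * np.cos(d)
    y = ints * np.cos(i) * np.sin(d)
    z = ints * np.sin(i)
    X, Y, Z = x.sum(), y.sum(), z.sum()
    R = np.sqrt(X**2 + Y**2 + Z**2)
    return np.degrees(np.arctan2(Y, X)) % 360, np.degrees(np.arcsin(Z / R)), R

dec, inc, R = vector_mean_np(np.array([10.]), np.array([20.]), np.array([5.]))
print(dec, inc, R)  # recovers 10.0, 20.0, 5.0 for a single vector
```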
[Essentials Chapter 2] [command line version]
We use vgp_di to convert Virtual Geomagnetic Pole positions to predicted directions. [See also di_vgp].
This program uses the function pmag.vgp_di().
help(pmag.vgp_di)
Help on function vgp_di in module pmagpy.pmag: vgp_di(plat, plong, slat, slong) Converts a pole position (pole latitude, pole longitude) to a direction (declination, inclination) at a given location (slat, slong) assuming a dipolar field. Parameters ---------- plat : latitude of pole (vgp latitude) plong : longitude of pole (vgp longitude) slat : latitude of site slong : longitude of site Returns ---------- dec,inc : tuple of declination and inclination
d,i=pmag.vgp_di(68,191,33,243)
print ('%7.1f %7.1f'%(d,i))
335.6 62.9
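For reference, the conversion is standard spherical trigonometry: the declination is the great-circle bearing from the site to the pole, and the inclination follows from the dipole formula $\tan(I) = 2\cot(p)$, where $p$ is the site-to-pole colatitude. A self-contained sketch (not PmagPy's code) reproduces the values above:

```python
import math

def vgp_to_di(plat, plong, slat, slong):
    """Direction expected at a site from a pole position, assuming a dipole field."""
    dlon = math.radians(plong - slong)
    pl, sl = math.radians(plat), math.radians(slat)
    # declination: great-circle bearing from site to pole
    dec = math.degrees(math.atan2(
        math.sin(dlon) * math.cos(pl),
        math.cos(sl) * math.sin(pl) - math.sin(sl) * math.cos(pl) * math.cos(dlon))) % 360
    # inclination: tan(I) = 2 cot(p), with p the site-to-pole colatitude
    cos_p = math.sin(sl) * math.sin(pl) + math.cos(sl) * math.cos(pl) * math.cos(dlon)
    p = math.acos(cos_p)
    inc = math.degrees(math.atan2(2 * math.cos(p), math.sin(p)))
    return dec, inc

d, i = vgp_to_di(68, 191, 33, 243)
print('%7.1f %7.1f' % (d, i))  # 335.6    62.9
```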
[Essentials Chapter 2] [MagIC Database] [command line version]
Plotting distributions of Virtual Geomagnetic Poles on many desired map projections is a frequent need in paleomagnetism. vgpmap_magic reads in MagIC formatted files and has a number of plotting options. It has been implemented into the ipmag module by Nick Swanson-Hysell as ipmag.plot_vgp().
We can use ipmag.plot_vgp() after reading in a MagIC formatted sites.txt file.
NB: you could also use pmagplotlib.plot_map() (see plot_map_pts) if more options are desired.
help(ipmag.plot_vgp)
Help on function plot_vgp in module pmagpy.ipmag: plot_vgp(map_axis, vgp_lon=None, vgp_lat=None, di_block=None, label='', color='k', marker='o', edge='black', markersize=20, legend=False) This function plots a paleomagnetic pole position on a cartopy map axis. Before this function is called, a plot needs to be initialized with code such as that in the make_orthographic_map function. Example ------- >>> vgps = ipmag.fishrot(dec=200,inc=30) >>> vgp_lon_list,vgp_lat_list,intensities= ipmag.unpack_di_block(vgps) >>> map_axis = ipmag.make_orthographic_map(central_longitude=200,central_latitude=30) >>> ipmag.plot_vgp(map_axis,vgp_lon=vgp_lon_list,vgp_lat=vgp_lat_list,color='red',markersize=40) Required Parameters ----------- map_axis : the name of the current map axis that has been developed using cartopy plon : the longitude of the paleomagnetic pole being plotted (in degrees E) plat : the latitude of the paleomagnetic pole being plotted (in degrees) Optional Parameters (defaults are used if not specified) ----------- color : the color desired for the symbol (default is 'k' aka black) marker : the marker shape desired for the pole mean symbol (default is 'o' aka a circle) edge : the color of the edge of the marker (default is black) markersize : size of the marker in pt (default is 20) label : the default is no label. Labels can be assigned. legend : the default is no legend (False). Putting True will plot a legend.
data=pd.read_csv('data_files/vgpmap_magic/sites.txt',sep='\t',header=1)
data.columns
Index(['age', 'age_sigma', 'age_unit', 'citations', 'conglomerate_test', 'contact_test', 'criteria', 'description', 'dir_alpha95', 'dir_dec', 'dir_inc', 'dir_k', 'dir_n_samples', 'dir_nrm_origin', 'dir_polarity', 'dir_r', 'dir_tilt_correction', 'int_abs', 'int_abs_sigma', 'int_n_samples', 'lat', 'location', 'lon', 'method_codes', 'result_type', 'site', 'specimens', 'vadm', 'vadm_sigma', 'vgp_dm', 'vgp_dp', 'vgp_lat', 'vgp_lon', 'vgp_n_samples'], dtype='object')
help(ipmag.make_orthographic_map)
Help on function make_orthographic_map in module pmagpy.ipmag: make_orthographic_map(central_longitude=0, central_latitude=0, figsize=(8, 8), add_land=True, land_color='tan', add_ocean=False, ocean_color='lightblue', grid_lines=True, lat_grid=[-80.0, -60.0, -30.0, 0.0, 30.0, 60.0, 80.0], lon_grid=[-180.0, -150.0, -120.0, -90.0, -60.0, -30.0, 0.0, 30.0, 60.0, 90.0, 120.0, 150.0, 180.0]) Function creates and returns an orthographic map projection using cartopy Example ------- >>> map_axis = make_orthographic_map(central_longitude=200,central_latitude=30) Optional Parameters ----------- central_longitude : central longitude of projection (default is 0) central_latitude : central latitude of projection (default is 0) figsize : size of the figure (default is 8x8) add_land : chose whether land is plotted on map (default is true) land_color : specify land color (default is 'tan') add_ocean : chose whether land is plotted on map (default is False, change to True to plot) ocean_color : specify ocean color (default is 'lightblue') grid_lines : chose whether gird lines are plotted on map (default is true) lat_grid : specify the latitude grid (default is 30 degree spacing) lon_grid : specify the longitude grid (default is 30 degree spacing)
lats=data['vgp_lat'].values
lons=data['vgp_lon'].values
if has_cartopy:
map_axis =ipmag.make_orthographic_map(central_latitude=60,figsize=(6,6),land_color='bisque',\
add_ocean=True,ocean_color='azure')
ipmag.plot_vgp(map_axis, vgp_lon=lons, vgp_lat=lats,\
markersize=50, legend='no',color='red')
elif has_basemap:
m = Basemap(projection='ortho',lat_0=60,lon_0=0)
plt.figure(figsize=(6, 6))
m.drawcoastlines(linewidth=0.25)
m.fillcontinents(color='bisque',lake_color='azure',zorder=1)
m.drawmapboundary(fill_color='azure')
m.drawmeridians(np.arange(0,360,30))
m.drawparallels(np.arange(-90,90,30))
ipmag.plot_vgp_basemap(m, vgp_lon=lons, vgp_lat=lats, color='r', marker='o', \
markersize=50, legend='no')
Or, you can run ipmag.vgpmap_magic:
help(ipmag.vgpmap_magic)
Help on function vgpmap_magic in module pmagpy.ipmag: vgpmap_magic(dir_path='.', results_file='sites.txt', crd='', sym='ro', size=8, rsym='g^', rsize=8, fmt='pdf', res='c', proj='ortho', flip=False, anti=False, fancy=False, ell=False, ages=False, lat_0=0, lon_0=0, save_plots=True, interactive=False, contribution=None) makes a map of vgps and a95/dp,dm for site means in a sites table Parameters ---------- dir_path : str, default "." input directory path results_file : str, default "sites.txt" name of MagIC format sites file crd : str, default "" coordinate system [g, t] (geographic, tilt_corrected) sym : str, default "ro" symbol color and shape, default red circles (see matplotlib documentation for more color/shape options) size : int, default 8 symbol size rsym : str, default "g^" symbol for plotting reverse poles (see matplotlib documentation for more color/shape options) rsize : int, default 8 symbol size for reverse poles fmt : str, default "pdf" format for figures, ["svg", "jpg", "pdf", "png"] res : str, default "c" resolution [c, l, i, h] (crude, low, intermediate, high) proj : str, default "ortho" ortho = orthographic lcc = lambert conformal moll = molweide merc = mercator flip : bool, default False if True, flip reverse poles to normal antipode anti : bool, default False if True, plot antipodes for each pole fancy : bool, default False if True, plot topography (not yet implementedj) ell : bool, default False if True, plot ellipses ages : bool, default False if True, plot ages lat_0 : float, default 0. eyeball latitude lon_0 : float, default 0. eyeball longitude save_plots : bool, default True if True, create and save all requested plots interactive : bool, default False if True, interactively plot and display (this is best used on the command line only) Returns --------- (status, output_files) - Tuple : (True or False indicating if conversion was sucessful, file name(s) written)
ipmag.vgpmap_magic('data_files/vgpmap_magic', lat_0=60, save_plots=False)
gridlines only supported for PlateCarree, Lambert Conformal, and Mercator plots currently
[Essentials Chapter 11] [command line version]
There are several different ways of testing whether two sets of directional data share a common mean. One popular (although perhaps not the best) way is to use Watson's F test (Watson, 1956, doi: 10.1111/j.1365-246X.1956.tb05560.x). [See also watsons_v or Lisa Tauxe's bootstrap way: common_mean].
If you still want to use Watson's F, then try pmag.watsons_f() for this.
help(pmag.watsons_f)
Help on function watsons_f in module pmagpy.pmag: watsons_f(DI1, DI2) calculates Watson's F statistic (equation 11.16 in Essentials text book). Parameters _________ DI1 : nested array of [Dec,Inc] pairs DI2 : nested array of [Dec,Inc] pairs Returns _______ F : Watson's F Fcrit : critical value from F table
DI1=np.loadtxt('data_files/watsons_f/watsons_f_example_file1.dat')
DI2=np.loadtxt('data_files/watsons_f/watsons_f_example_file2.dat')
F,Fcrit=pmag.watsons_f(DI1,DI2)
print ('%7.2f %7.2f'%(F,Fcrit))
5.23 3.26
[Essentials Chapter 11] [command line version]
Watson (1983, doi: 10.1016/0378-3758(83)90043-5) proposed a clever Monte Carlo type test for a common mean direction for two data sets. This was implemented as ipmag.common_mean_watson().
help(ipmag.common_mean_watson)
Help on function common_mean_watson in module pmagpy.ipmag: common_mean_watson(Data1, Data2, NumSims=5000, print_result=True, plot='no', save=False, save_folder='.', fmt='svg') Conduct a Watson V test for a common mean on two directional data sets. This function calculates Watson's V statistic from input files through Monte Carlo simulation in order to test whether two populations of directional data could have been drawn from a common mean. The critical angle between the two sample mean directions and the corresponding McFadden and McElhinny (1990) classification is printed. Parameters ---------- Data1 : a nested list of directional data [dec,inc] (a di_block) Data2 : a nested list of directional data [dec,inc] (a di_block) NumSims : number of Monte Carlo simulations (default is 5000) print_result : default is to print the test result (True) plot : the default is no plot ('no'). Putting 'yes' will the plot the CDF from the Monte Carlo simulations. save : optional save of plots (default is False) save_folder : path to where plots will be saved (default is current) fmt : format of figures to be saved (default is 'svg') Returns ------- printed text : text describing the test result is printed result : a boolean where 0 is fail and 1 is pass angle : angle between the Fisher means of the two data sets critical_angle : critical angle for the test to pass Examples -------- Develop two populations of directions using ``ipmag.fishrot``. Use the function to determine if they share a common mean. >>> directions_A = ipmag.fishrot(k=20, n=30, dec=40, inc=60) >>> directions_B = ipmag.fishrot(k=35, n=25, dec=42, inc=57) >>> ipmag.common_mean_watson(directions_A, directions_B)
# use the same data as for watsons_f
DI1=np.loadtxt('data_files/watsons_f/watsons_f_example_file1.dat')
DI2=np.loadtxt('data_files/watsons_f/watsons_f_example_file2.dat')
plt.figure(1,(5,5))
ipmag.common_mean_watson(DI1,DI2,plot='yes')
Results of Watson V test:
Watson's V:           10.5
Critical value of V:  6.6
"Fail": Since V is greater than Vcrit, the two means can be distinguished at the 95% confidence level.
M&M1990 classification:
Angle between data set means: 21.5
Critical angle for M&M1990:   17.0
<Figure size 252x180 with 0 Axes>
(0, 21.534333502358034, 17.044646621085285)
[Essentials Chapter 9] [command line version]
zeq is a quick and dirty plotter for Zijderveld (Zijderveld, J. D. A. (1967). A.C. demagnetization of rocks: analysis of results. In D. Collinson, K. Creer, & S. Runcorn (Eds.), Methods in Paleomagnetism (pp. 254–286). Amsterdam: Elsevier) diagrams. It calls pmagplotlib.plot_zed() to do the plotting.
This example plots the data in specimen coordinates; if other coordinate systems are desired, perform the di_geo and di_tilt steps first.
help(pmagplotlib.plot_zed)
Help on function plot_zed in module pmagpy.pmagplotlib: plot_zed(ZED, datablock, angle, s, units) function to make equal area plot and zijderveld plot Parameters _________ ZED : dictionary with keys for plots eqarea : figure number for equal area projection zijd : figure number for zijderveld plot demag : figure number for magnetization against demag step datablock : nested list of [step, dec, inc, M (Am2), quality] step : units assumed in SI M : units assumed Am2 quality : [g,b], good or bad measurement; if bad will be marked as such angle : angle for X axis in horizontal plane, if 0, x will be 0 declination s : specimen name units : SI units ['K','T','U'] for kelvin, tesla or undefined Effects _______ calls plotting functions for equal area, zijderveld and demag figures
# we can make the figure dictionary that pmagplotlib likes:
ZED={'eqarea':1,'zijd':2, 'demag':3}
# read in data
data=pd.read_csv('data_files/zeq/zeq_example.dat',sep=r'\s+',header=None)
data.columns=['specimen','step','m (emu)','dec','inc']
data['m SI']=data['m (emu)']*1e-3 # convert to SI units from lab (emu) units
data['quality']='g' # add in default "good" quality designation
data['step SI']=data['step']*1e-3 # convert to tesla
data['blank']="" # this is a dummy variable expected by plot_zed
specimens=data.specimen.unique()
angle=0
units='T' # these are AF data
cnt=1
for s in specimens:
    # make the figure dictionary that pmagplotlib likes:
    ZED={'eqarea':cnt,'zijd':cnt+1, 'demag':cnt+2}
    cnt+=3
    spec_df=data[data.specimen==s]
    datablock=spec_df[['step SI','dec','inc','m SI','blank','quality']].values.tolist()
    pmagplotlib.plot_zed(ZED,datablock,angle,s,units)
[Essentials Chapter 9] [MagIC Database] [command line version]
This program is the same as zeq but for MagIC formatted input files. This example plots the data in specimen coordinates; if another coordinate system is desired, perform the di_geo and di_tilt steps first.
# read in MagIC formatted data
dir_path='data_files/zeq_magic/'
ipmag.zeq_magic(input_dir_path=dir_path, save_plots=False)
-I- Using cached data model
-I- Couldn't connect to earthref.org, using cached method codes
-I- Using cached method codes
-I- Using cached vocabularies
-I- Using cached suggested vocabularies
(True, [])
Image('data_files/Figures/chartmaker.png')
You can print it out and tape it to the oven in the lab to help keep track of this annoyingly complicated experiment. :)
To make this from within a notebook, call pmag.chart_maker().
help(pmag.chart_maker)
Help on function chart_maker in module pmagpy.pmag:

chart_maker(Int, Top, start=100, outfile='chart.txt')
    Makes a chart for performing IZZI experiments. Print out the file and
    tape it to the oven. This chart will help keep track of the different steps.
    Z : performed in zero field - enter the temperature XXX.0 in the sio
        formatted measurement file created by the LabView program
    I : performed in the lab field written at the top of the form
    P : a pTRM step - performed at the temperature and in the lab field.

    Parameters
    __________
    Int : list of intervals [e.g., 50,10,5]
    Top : list of upper bounds for each interval [e.g., 500, 550, 600]
    start : first temperature step, default is 100
    outfile : name of output file, default is 'chart.txt'

    Output
    _________
    creates a file with:
        file : write down the name of the measurement file
        field : write down the lab field for the infield steps (in uT)
        the type of step (Z: zerofield, I: infield, P: pTRM step)
        temperature of the step and code for SIO-like treatment steps
            XXX.0 [zero field]
            XXX.1 [in field]
            XXX.2 [pTRM check] - done in a lab field
        date : date the step was performed
        run # : an optional run number
        zones I-III : field in the zones in the oven
        start : time the run was started
        sp : time the setpoint was reached
        cool : time cooling started
To perform 50 degree intervals from 100 to 500, followed by 10 degree intervals from 500 to 600 set up the Int and Top lists like this:
Int=[50,10]
Top=[500,600]
pmag.chart_maker(Int,Top)
output stored in: chart.txt
You can now print out chart.txt. Happy IZZI-ing.
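The stepping rule that chart_maker applies to Int and Top can be sketched as a small helper (a hypothetical function, not part of PmagPy), which makes the interval logic explicit:

```python
def izzi_steps(Int, Top, start=100):
    """Build the list of temperature set points for an IZZI chart.
    Mirrors the Int/Top convention used by pmag.chart_maker: step by
    each interval until that interval's upper bound is reached."""
    steps = [start]
    t = start
    for interval, top in zip(Int, Top):
        while t < top:
            t += interval
            steps.append(t)
    return steps

# first 50-degree steps from 100 to 500, then 10-degree steps to 600
print(izzi_steps([50, 10], [500, 600]))
```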
import glob
# remove some individual files
filenames = ['chart.txt',
             'data_files/azdip_magic/samples.txt', 'data_files/download_magic/criteria.txt',
             'data_files/orientation_magic/samples.txt', 'data_files/orientation_magic/sites.txt',
             'data_files/download_magic/ages.txt', 'data_files/download_magic/contribution.txt',
             'data_files/download_magic/measurements.txt', 'data_files/download_magic/samples.txt',
             'data_files/download_magic/specimens.txt', 'data_files/download_magic/locations.txt']
for fname in filenames:
    try:
        os.remove(fname)
    except FileNotFoundError:
        pass
# remove all MagIC-generated files from a given directory
def remove_magic_files(directory):
    magic_files = ['specimens.txt', 'samples.txt', 'sites.txt', 'locations.txt', 'measurements.txt',
                   'contribution.txt', 'ages.txt']
    dir_files = os.listdir(directory)
    for dtype in magic_files:
        # remove the standard MagIC file itself, if present in the directory
        try:
            os.remove(os.path.join(directory, dtype))
        except FileNotFoundError:
            pass
        # also remove any file whose name ends with a standard MagIC file name
        for fname in dir_files:
            if fname.endswith(dtype):
                try:
                    os.remove(os.path.join(directory, fname))
                except FileNotFoundError:
                    pass
    for full_fname in glob.glob(os.path.join(directory, '*.magic')):
        os.remove(full_fname)
# clean up each data directory (convert_2_magic/jr6_magic is handled separately below)
for directory in ['.', 'data_files/convert_2_magic/2g_bin_magic/mn1', 'data_files/convert_2_magic/pmd_magic/PMD/',
                  'data_files', 'data_files/k15_s', 'data_files/convert_2_magic/agm_magic',
                  'data_files/convert_2_magic/huji_magic', 'data_files/convert_2_magic/bgc_magic',
                  'data_files/convert_2_magic/kly4s_magic', 'data_files/convert_2_magic/mst_magic',
                  'data_files/convert_ages', 'data_files/convert_2_magic/cit_magic/MIT/7325B',
                  'data_files/convert_2_magic/cit_magic/USGS/bl9-1', 'data_files/convert_2_magic/tdt_magic',
                  'data_files/convert_2_magic/ldeo_magic', 'data_files/convert_2_magic/k15_magic',
                  'data_files/convert_2_magic/generic_magic']:
    remove_magic_files(directory)
lst = ['*.png', './data_files/convert_2_magic/jr6_magic/SML*.txt', './data_files/download_magic/Snake*',
       './data_files/convert_2_magic/jr6_magic/AP12_*.txt',
       './data_files/convert_2_magic/jr6_magic/*_measurements.txt', './data_files/convert_2_magic/jr6_magic/*.magic',
       './data_files/3_0/McMurdo/*.tex', './data_files/3_0/McMurdo/*.xls', './data_files/3_0/Megiddo/*.tex',
       'data_files/3_0/Megiddo/*.xls']
for pattern in lst:
    for fname in glob.glob(pattern):
        os.remove(fname)