# PmagPy

by Lisa Tauxe, Lori Jonestrask, Nick Swanson-Hysell and Nick Jarboe

This notebook demonstrates the use of PmagPy functions from within a Jupyter notebook. More information is available in the PmagPy cookbook http://earthref.org/PmagPy/cookbook. For examples of how to use PmagPy scripts on the command line, see the PmagPy-cli notebook.

### Importing packages and modules into the notebook

• To use the functions in this notebook, we first have to import the PmagPy modules pmagplotlib, pmag, and ipmag, along with some other handy functions for use in the notebook.
• Run the code block below (click on the cell and then click 'Run'; please be patient!):
In [1]:
import pmagpy.pmag as pmag
import pmagpy.pmagplotlib as pmagplotlib
import pmagpy.ipmag as ipmag
import pmagpy.contribution_builder as cb
from pmagpy import convert_2_magic as convert
import matplotlib.pyplot as plt # our plotting buddy
import numpy as np # the fabulous NumPy package
import pandas as pd # and of course Pandas
# test if Basemap and/or cartopy is installed
has_basemap, Basemap = pmag.import_basemap()
has_cartopy, Cartopy = pmag.import_cartopy()
# test if xlwt is installed (allows you to export to excel)
try:
    import xlwt
    has_xlwt = True
except ImportError:
    has_xlwt = False
# This allows you to make matplotlib plots inside the notebook.
%matplotlib inline
from IPython.display import Image
import os

print('All modules imported!')

All modules imported!

• Now you have everything you need to run PmagPy!

## Guide to PmagPy

• The functions in this notebook are listed alphabetically, so here is a handy guide by function:
• Calculations:

• angle : calculates the angle between two vectors
• apwp : returns predicted paleolatitudes, directions and pole latitude/longitude from apparent polar wander paths of Besse and Courtillot (2002).
• b_vdm : converts B (in microT) and (magnetic) latitude to V(A)DM (see vdm_b)
• bootams : calculates bootstrap statistics for tensor data
• cart_dir : converts cartesian coordinates (x,y,z) to declination, inclination, intensity (see dir_cart)
• di_eq : maps declination, inclination to X,Y for plotting in equal area projections
• di_geo : rotates declination, inclination in specimen coordinates to geographic coordinates
• di_rot : rotates directions to a coordinate system with D,I as center
• di_tilt : rotates directions to stratigraphic coordinates
• di_vgp : converts direction to Virtual Geomagnetic Pole (see vgp_di)
• dia_vgp : converts direction and $\alpha_{95}$ to Virtual Geomagnetic Pole and dp,dm
• dipole_pinc : calculates inclination given latitude assuming geocentric axial dipole
• dipole_plat : calculates latitude given inclination assuming geocentric axial dipole
• dir_cart : converts declination, inclination, intensity to cartesian coordinates (see cart_dir)
• eigs_s : converts eigenparameters to equivalent 6 element tensor (see s_eigs)
• eq_di : takes X,Y from equal area projection (e.g., from digitized coordinates) and converts to declination, inclination
• fcalc : returns the value from an F table, given the degrees of freedom.
• fisher : generates sets of directions drawn from Fisher distributions with vertical true mean
• fishrot : generates sets of directions drawn from Fisher distributions with arbitrary true mean
• flip : flips a second mode (reverse directions) to their antipodes
• gaussian : generates data drawn from a normal distribution
• gobing : calculates Bingham statistics from a set of directions
• gofish : calculates Fisher statistics from a set of directions
• gokent : calculates Kent statistics from a set of directions
• goprinc : calculates principal directions statistics
• igrf : calculates geomagnetic field vectors for location, age given a field model (e.g., IGRF)
• incfish : estimates the true mean inclination from inclination only data
• pca : calculates the best-fit line or plane for demagnetization data and associated statistics
• pt_rot : rotates point given finite rotation pole
• scalc : calculates VGP scatter
• s_eigs : takes a 6 element tensor and calculates eigen parameters (see eigs_s)
• s_geo : rotates 6 element tensors to geographic coordinates
• s_hext : calculates Hext statistics from 6 element tensors
• s_tilt : rotates 6 element tensors to stratigraphic coordinates
• separate_directions : separates a set of directions into two modes (normal and reverse)
• squish: flattens inclination data given flattening factor (see unsquish)
• sundec : calculates direction to the sun for location, date, time and sun azimuth
• tk03 : generates sets of directions consistent with the TK03 field model
• uniform : generates sets of uniformly distributed directions
• unsquish : unsquishes flattened inclinations, given flattening factor (see squish)
• vector_mean : calculates vector mean for sets of vectors (declination, inclination, intensity)
• vdm_b : calculates intensity at given location from specified virtual dipole moment (see b_vdm)
• vgp_di : calculates direction at given location from virtual geomagnetic pole (see di_vgp)
• watsons_f : calculates Watson's F statistic for testing for common mean
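Several of these converters implement simple closed-form relations. For example, dipole_pinc and dipole_plat are based on the geocentric axial dipole relation tan(I) = 2 tan(λ). Here is a minimal NumPy sketch of that relation (the helper names are made up for illustration; this is not the pmag source):

```python
import numpy as np

def gad_inclination(lat):
    """Inclination (degrees) expected at latitude lat for a
    geocentric axial dipole: tan(I) = 2*tan(lat)."""
    return np.degrees(np.arctan(2.0 * np.tan(np.radians(lat))))

def gad_latitude(inc):
    """Inverse relation: latitude (degrees) from inclination."""
    return np.degrees(np.arctan(0.5 * np.tan(np.radians(inc))))

# GAD inclinations are steeper than the latitude itself
print(round(gad_inclination(30.0), 1))  # → 49.1
```

The round trip gad_latitude(gad_inclination(lat)) recovers the latitude, which is the same pairing as dipole_pinc/dipole_plat.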
• Plots:

• ani_depthplot : plots anisotropy data against depth in stratigraphic section (Xmas tree plots)
• aniso_magic : makes plots of anisotropy data and bootstrapped confidences
• biplot_magic : plots different columns against each other in MagIC formatted data files
• chi_magic : plots magnetic susceptibility data in MagIC format as function of field, frequency or temperature
• common_mean : graphical approach to testing two sets of directions for common mean using bootstrap
• core_depthplot : plots MagIC formatted data against core depth or age
• curie : makes plots of Curie Temperature data and provides estimates for Tc
• dayplot_magic : makes Day et al. (1977) and other plots with hysteresis statistics
• dmag_magic : plots remanence against demagnetization step for MagIC formatted files
• eqarea and eqarea_magic : makes equal area projections for directions
• eqarea_ell : makes equal area projections for directions with specified confidence ellipses
• find_ei : finds the inclination unflattening factor that unsquishes directions to match TK03 distribution
• fishqq: makes a Quantile-Quantile plot for directions against uniform and exponential distributions
• foldtest & foldtest_magic : finds tilt correction that maximizes concentration of directions, with bootstrap confidence bounds.
• forc_diagram: plots FORC diagrams for both conventional and irregular FORCs
• hysteresis_magic : makes plots of hysteresis data (not FORCs).
• irm_unmix : analyzes IRM acquisition data in terms of coercivity distributions
• irmaq_magic : plots IRM acquisition data
• lnp_magic : plots lines and planes for site level data and calculates best fit mean and alpha_95
• lowes : makes a plot of the Lowes spectrum for a geomagnetic field model
• lowrie and lowrie_magic : makes plots of Lowrie's (1990) 3D-IRM demagnetization experiments
• plot_cdf and plot_2cdfs : makes a cumulative distribution plot of data
• plotdi_a : makes equal area plots of directions and their $\alpha_{95}$s
• plot_geomagia : makes plots from files downloaded from the geomagia website
• plot_magic_keys : plots data from MagIC formatted data files
• plot_ts : makes a plot of the desired Geomagnetic Reversal time scale
• qqplot : makes a Quantile-Quantile plot for data against a normal distribution
• qqunf : makes a Quantile-Quantile plot for data against a uniform distribution
• quick_hyst : makes hysteresis plots
• revtest & revtest_magic : performs a bootstrap reversals test
• thellier_magic : makes plots of thellier-thellier data.
• watsons_v : makes a graph for Watson's V test for common mean
• zeq and zeq_magic : makes quicky zijderveld plots for measurement data
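The equal area projections behind eqarea, eqarea_magic and di_eq use the Lambert equal-area mapping. Here is a standalone sketch of that mapping (the formula follows standard equal-area practice and is assumed here rather than copied from the pmag source; the function name is made up):

```python
import numpy as np

def equal_area_xy(dec, inc):
    """Map a declination/inclination pair (degrees, inclination positive
    down) to X, Y on a unit equal-area (Schmidt) net, north up."""
    d, i = np.radians(dec), np.radians(inc)
    # unit Cartesian components: north, east, down
    x, y, z = np.cos(i) * np.cos(d), np.cos(i) * np.sin(d), np.sin(i)
    if np.isclose(abs(z), 1.0):
        return 0.0, 0.0                      # vertical plots at the center
    r = np.sqrt(1.0 - abs(z)) / np.sqrt(x**2 + y**2)  # equal-area radius
    return y * r, x * r                      # plot east as X, north as Y

print(equal_area_xy(90.0, 0.0))  # horizontal, due east: on the unit circle
```

Horizontal directions land on the unit circle and vertical ones at the origin, which is why shallow directions plot near the edge of an equal area net.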
• Maps:

• cont_rot : makes plots of continents after rotation to specified coordinate system
• plot_mag_map : makes a color contour plot of geomagnetic field models
• plot_map_pts : plots points on maps
• polemap_magic : reads in MagIC formatted file with paleomagnetic poles and plots them
• vgpmap_magic : reads in MagIC formatted file with virtual geomagnetic poles and plots them
• Working with MagIC:

• writing MagIC files : outputting MagIC formatted files
• combine_magic : combines two MagIC formatted files of same type
• grab_magic_key : prints out a single column from a MagIC format file
• magic_select : selects data from MagIC format file given conditions (e.g., method_codes contain string)
• sites_extract : makes excel or latex files from sites.txt for publications
• criteria_extract : makes excel or latex files from criteria.txt for publications
• specimens_extract : makes excel or latex files from specimens.txt for publications

• contributions : functions for working with data model 3.0 MagIC contributions

• cb.add_sites_to_meas_table : completes a measurements data frame with the information required for plotting by site.
• cb.get_intensity_col : finds the first non-zero type of intensity data in a measurements dataframe.
• conversion scripts : convert many laboratory measurement formats to the MagIC data model 3 format

• _2g_asc_magic : converts 2G ascii files to MagIC
• _2g_bin_magic : converts 2G binary files to MagIC
• aarm_magic : takes a MagIC formatted measurements.txt file with anisotropy of ARM data, calculates the tensors, and stores them in a MagIC formatted specimens.txt file.
• atrm_magic : takes a MagIC formatted measurements.txt file with anisotropy of TRM data, calculates the tensors, and stores them in a MagIC formatted specimens.txt file.
• agm_magic : converts Princeton Measurements alternating gradient force magnetization (AGM) files to MagIC.
• bgc_magic : converts Berkeley Geochronology Center files to MagIC.
• cit_magic : converts Caltech format files to MagIC.
• generic_magic : converts generic files to MagIC.
• huji_magic : converts Hebrew University, Jerusalem, Israel files to MagIC.
• huji_sample_magic : converts HUJI files to a MagIC format.
• jr6_jr6_magic : converts the AGICO JR6 spinner .jr6 files to MagIC
• jr6_txt_magic : converts the AGICO JR6 .txt files to MagIC
• k15_magic : converts 15 measurement anisotropy of magnetic susceptibility files to MagIC.
• kly4s_magic : converts SIO KLY4S formatted files to MagIC.
• ldeo_magic : converts Lamont-Doherty files to MagIC.
• livdb_magic : converts Liverpool files to MagIC.
• mst_magic : converts Curie Temperature experimental data to MagIC
• s_magic : converts files with 6 tensor elements (S_j) to MagIC format
• sio_magic : converts Scripps Institution of Oceanography data files to MagIC
• sufar4_magic : converts AGICO SUFAR program (ver.1.2.) ascii files to MagIC
• tdt_magic : converts Thellier Tool files to MagIC
• utrecht_magic : converts Fort Hoofddijk, Utrecht University Robot files to MagIC
• orientation_magic : converts an "orient.txt" formatted file with field notebook information into MagIC formatted files
• azdip_magic : converts an "azdip" formatted file to a samples.txt file format
• other handy scripts
• chartmaker : script for making chart to guide IZZI lab experiment

## Figures

• The plotting functions draw plots to the screen (via the %matplotlib inline magic command), but any matplotlib plot can be saved with the command:

plt.savefig('PATH_TO_FILE_NAME.FMT')

and then viewed in the notebook with:

Image('PATH_TO_FILE_NAME.FMT')

## Working with the MagIC database

• The Magnetics Information Consortium (MagIC) maintains a database of published rock and paleomagnetic data: https://www.earthref.org/MagIC
• Many PmagPy scripts are designed to work with data in the MagIC format. This notebook uses Data Model 3.0: https://www.earthref.org/MagIC/data-models/3.0. There are nine basic tables: contribution, locations, sites, samples, specimens, measurements, criteria, ages and images. These are tab-delimited data tables whose first line consists of a delimiter and the table name (e.g., tab measurements); all of the examples here are tab delimited. The second line contains the column names (e.g., specimen experiment method_codes treat_temp.....), and each subsequent line is a single record.

See the first few lines of this sample file below:

In [2]:
with open('data_files/3_0/McMurdo/samples.txt') as f:
    for line in f.readlines()[:3]:
        print(line, end="")

tab 	samples
azimuth	azimuth_dec_correction	citations	description	dip	geologic_classes	geologic_types	lat	lithologies	lon	method_codes	orientation_quality	sample	site
260	0	This study	Archived samples from 1965, 66 expeditions.	-57	Extrusive:Igneous	Lava Flow	-77.85	Trachyte	166.64	SO-SIGHT:FS-FD	g	mc01a	mc01


### I/O with MagIC data files

• MagIC formatted data files can be imported to a notebook in one of two ways:

• importing to a Pandas DataFrame using the Pandas pd.read_csv() function
• importing to a list of dictionaries using the pmag.magic_read() function.

In this notebook, we generally read MagIC tables into a Pandas DataFrame with a command like:

meas_df = pd.read_csv('MEASUREMENTS_FILE_PATH',sep='\t',header=1)

These data can then be manipulated with Pandas functions (https://pandas.pydata.org/).

In [3]:
meas_df=pd.read_csv('data_files/3_0/McMurdo/measurements.txt',sep='\t',header=1)
meas_df.head()

Out[3]:
experiment specimen measurement dir_csd dir_dec dir_inc hyst_charging_mode hyst_loop hyst_sweep_rate treat_ac_field ... timestamp magn_r2_det magn_x_sigma magn_xyz_sigma magn_y_sigma magn_z_sigma susc_chi_mass susc_chi_qdr_mass susc_chi_qdr_volume susc_chi_volume
0 mc01f-LP-DIR-AF mc01f mc01f-LP-DIR-AF1 0.4 171.9 31.8 NaN NaN NaN 0.0000 ... NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
1 mc01f-LP-DIR-AF mc01f mc01f-LP-DIR-AF2 0.4 172.0 30.1 NaN NaN NaN 0.0050 ... NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
2 mc01f-LP-DIR-AF mc01f mc01f-LP-DIR-AF3 0.5 172.3 30.4 NaN NaN NaN 0.0075 ... NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
3 mc01f-LP-DIR-AF mc01f mc01f-LP-DIR-AF4 0.4 172.1 30.4 NaN NaN NaN 0.0100 ... NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
4 mc01f-LP-DIR-AF mc01f mc01f-LP-DIR-AF5 0.5 171.9 30.8 NaN NaN NaN 0.0125 ... NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN

5 rows × 60 columns

Alternatively, the user may wish to use a list of dictionaries compatible with many pmag functions. For that, use the pmag.magic_read() function:

In [4]:
help (pmag.magic_read)

Help on function magic_read in module pmagpy.pmag:

Reads  a Magic template file, returns  data in a list of dictionaries.

Parameters
___________
Required:
infile : the MagIC formatted tab delimited data file
first line contains 'tab' in the first column and the data file type in the second (e.g., measurements, specimen, sample, etc.)
Optional:
Returns
_______
list of dictionaries, file type


In [5]:
meas_dict,file_type=pmag.magic_read('data_files/3_0/McMurdo/measurements.txt')
print (file_type)
print (meas_dict[0])

measurements
{'experiment': 'mc01f-LP-DIR-AF', 'specimen': 'mc01f', 'measurement': 'mc01f-LP-DIR-AF1', 'dir_csd': '0.4', 'dir_dec': '171.9', 'dir_inc': '31.8', 'hyst_charging_mode': '', 'hyst_loop': '', 'hyst_sweep_rate': '', 'treat_ac_field': '0.0', 'treat_ac_field_dc_off': '', 'treat_ac_field_dc_on': '', 'treat_ac_field_decay_rate': '', 'treat_dc_field': '0.0', 'treat_dc_field_ac_off': '', 'treat_dc_field_ac_on': '', 'treat_dc_field_decay_rate': '', 'treat_dc_field_phi': '0.0', 'treat_dc_field_theta': '0.0', 'treat_mw_energy': '', 'treat_mw_integral': '', 'treat_mw_power': '', 'treat_mw_time': '', 'treat_step_num': '1', 'treat_temp': '273', 'treat_temp_dc_off': '', 'treat_temp_dc_on': '', 'treat_temp_decay_rate': '', 'magn_mass': '', 'magn_moment': '2.7699999999999996e-05', 'magn_volume': '', 'citations': 'This study', 'instrument_codes': '', 'method_codes': 'LT-NO:LP-DIR-AF', 'quality': 'g', 'standard': 'u', 'meas_field_ac': '', 'meas_field_dc': '', 'meas_freq': '', 'meas_n_orient': '', 'meas_orient_phi': '', 'meas_orient_theta': '', 'meas_pos_x': '', 'meas_pos_y': '', 'meas_pos_z': '', 'meas_temp': '273', 'meas_temp_change': '', 'analysts': 'Jason Steindorf', 'description': '', 'software_packages': 'pmagpy-1.65b', 'timestamp': '', 'magn_r2_det': '', 'magn_x_sigma': '', 'magn_xyz_sigma': '', 'magn_y_sigma': '', 'magn_z_sigma': '', 'susc_chi_mass': '', 'susc_chi_qdr_mass': '', 'susc_chi_qdr_volume': '', 'susc_chi_volume': ''}


### magic_write

To write out a MagIC table from a Pandas DataFrame, first convert it to a list of dictionaries using a command like:

dicts = df.to_dict('records')

then call pmag.magic_write().

From a list of dictionaries, you can just call pmag.magic_write() directly.

### pmag.magic_write

In [6]:
help(pmag.magic_write)

Help on function magic_write in module pmagpy.pmag:

magic_write(ofile, Recs, file_type)
Parameters
_________
ofile : path to output file
Recs : list of dictionaries in MagIC format
file_type : MagIC table type (e.g., specimens)

Return :
[True,False] : True if successful
ofile : same as input

Effects :
writes a MagIC formatted file from Recs


In [7]:
meas_dicts = meas_df.to_dict('records')
pmag.magic_write('my_measurements.txt', meas_dicts, 'measurements')

25470  records written to file  my_measurements.txt

Out[7]:
(True, 'my_measurements.txt')

### combine_magic

MagIC tables have many columns, only some of which are used in a particular instance, so combining files of the same type must be done carefully to ensure that the right data come under the right headings. The program combine_magic can be used to combine any number of MagIC files of a given type.
It reads in MagIC formatted files of a common type (e.g., sites.txt) and combines them into a single file, taking care that all the columns are preserved. For example, if there are both AF and thermal data from a study and we created a measurements.txt formatted file for each, we could use combine_magic.py on the command line to combine them into a single measurements.txt file. In a notebook, we use ipmag.combine_magic().
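Under the hood, this kind of column bookkeeping resembles an outer concatenation in Pandas: pd.concat keeps the union of the columns and fills the gaps, which can then be blanked before writing a MagIC file. A small illustrative sketch (the two toy tables and their values are invented; the column names come from the measurements table):

```python
import pandas as pd

# two measurement tables with only partly overlapping columns
af = pd.DataFrame({'specimen': ['s1'], 'treat_ac_field': [0.01]})
thermal = pd.DataFrame({'specimen': ['s2'], 'treat_temp': [573.0]})

# concat keeps the union of the columns; absent values become NaN,
# which can be blanked before writing out a MagIC file
combined = pd.concat([af, thermal], ignore_index=True, sort=False).fillna('')
print(sorted(combined.columns))  # → ['specimen', 'treat_ac_field', 'treat_temp']
```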

In [8]:
help(ipmag.combine_magic)

Help on function combine_magic in module pmagpy.ipmag:

combine_magic(filenames, outfile, data_model=3, magic_table='measurements', output_dir_path='.', input_dir_path='')
Takes a list of magic-formatted files, concatenates them, and creates a
single file. Returns output filename if the operation was successful.

Parameters
-----------
filenames : list of MagIC formatted files
outfile : name of output file
data_model : data model number (2.5 or 3), default 3
magic_table : name of magic table, default 'measurements'

Returns
----------
outfile name if success, False if failure



Here we make a list of names of two MagIC formatted measurements.txt files and use ipmag.combine_magic() to put them together.

In [9]:
filenames=['data_files/combine_magic/af_measurements.txt','data_files/combine_magic/therm_measurements.txt']
outfile='data_files/combine_magic/measurements.txt'
ipmag.combine_magic(filenames,outfile)

-I- Using cached data model
-I- Couldn't connect to earthref.org, using cached method codes
-I- Using cached method codes
-I- Using cached vocabularies
-I- Using cached suggested vocabularies
-I- overwriting /Users/nebula/Python/PmagPy/data_files/combine_magic/measurements.txt
-I- 14 records written to measurements file

Out[9]:
'/Users/nebula/Python/PmagPy/data_files/combine_magic/measurements.txt'

### convert_ages

Files downloaded from the MagIC search interface have ages that are in the original units, but what is often desired is for them to be in a single unit. For example, if we searched the MagIC database for all absolute paleointensity data (records with method codes of 'LP-PI-TRM') from the last five million years, the data sets have a variety of age units. We can use pmag.convert_ages() to convert them all to millions of years.
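Conceptually, the conversion just scales each record's age by a unit factor. A toy sketch of the idea (the factor table and helper name here are illustrative, not pmag's internal list of MagIC age units):

```python
# illustrative unit factors for converting ages to millions of years
TO_MA = {'Ma': 1.0, 'ka': 1e-3, 'Years BP': 1e-6, 'Ga': 1e3}

def age_to_ma(age, unit):
    """Return an age in Ma, mimicking what pmag.convert_ages does per record."""
    return float(age) * TO_MA[unit]

print(age_to_ma(780, 'ka'))  # ka to Ma
```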

In [10]:
ipmag.download_magic('magic_downloaded_rows.txt',dir_path='data_files/convert_ages/',
input_dir_path='data_files/convert_ages/')

working on:  'contribution'
1  records written to file  /Users/nebula/Python/PmagPy/data_files/convert_ages/contribution.txt
contribution  data put in  /Users/nebula/Python/PmagPy/data_files/convert_ages/contribution.txt
working on:  'sites'
14317  records written to file  /Users/nebula/Python/PmagPy/data_files/convert_ages/sites.txt
sites  data put in  /Users/nebula/Python/PmagPy/data_files/convert_ages/sites.txt

Out[10]:
True

After some minimal filtering using Pandas, we can convert a DataFrame to a list of dictionaries required by most PmagPy functions and use pmag.convert_ages() to convert all the ages. The converted list of dictionaries can then be turned back into a Pandas DataFrame and either plotted or filtered further as desired.

In this example, we filter for data younger than 5 Ma (and older than 0.05 Ma), then plot intensity against latitude. We can also use vdm_b to plot the intensities expected from the present dipole moment (~80 ZAm$^2$).
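The vdm_b conversion used below is the surface field of a geocentric dipole, B = (μ0/4π) m √(1 + 3 sin²λ) / r³. Here is a NumPy sketch of that relation (the function name dipole_b and the Earth radius value are assumptions for this illustration):

```python
import numpy as np

def dipole_b(vdm, lat, r=6.371e6):
    """Surface field intensity (T) of a dipole of moment vdm (Am^2) at
    magnetic latitude lat (degrees): the relation behind pmag.vdm_b."""
    mu0_over_4pi = 1e-7  # T m / A
    lam = np.radians(lat)
    return mu0_over_4pi * vdm * np.sqrt(1.0 + 3.0 * np.sin(lam)**2) / r**3

# the present dipole moment (~80 ZAm^2) gives ~31 microtesla at the equator
print(round(dipole_b(8e22, 0.0) * 1e6, 1))  # → 30.9
```

The √(1 + 3 sin²λ) factor means the field at the poles is exactly twice the equatorial value, which sets the shape of the red curve in the plot below.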

In [11]:
help(pmag.convert_ages)

Help on function convert_ages in module pmagpy.pmag:

convert_ages(Recs, data_model=3)
converts ages to Ma
Parameters
_________
Recs : list of dictionaries in data model by data_model
data_model : MagIC data model (default is 3)


In [12]:
# read in the sites.txt file as a dataframe
site_df=pd.read_csv('data_files/convert_ages/sites.txt',sep='\t',header=1)
# get rid of any records without intensity data or latitude
site_df=site_df.dropna(subset=['int_abs','lat'])
# Pick out the sites with 'age' filled in
site_df_age=site_df.dropna(subset=['age'])
# pick out those with age_low and age_high filled in
site_df_lowhigh=site_df.dropna(subset=['age_low','age_high'])
# concatenate the two
site_all_ages=pd.concat([site_df_age,site_df_lowhigh])
# get rid of duplicates (records with age, age_high AND age_low)
site_all_ages.drop_duplicates(inplace=True)
# Pandas reads in blanks as NaN, which pmag.convert_ages hates
# this replaces all the NaNs with blanks
site_all_ages.fillna('',inplace=True)
# converts to a list of dictionaries
sites=site_all_ages.to_dict('records')
# converts the ages to Ma
converted_df=pmag.convert_ages(sites)
# turn it back into a DataFrame
site_ages=pd.DataFrame(converted_df)
# filter away
site_ages=site_ages[site_ages.age.astype(float) <= 5]
site_ages=site_ages[site_ages.age.astype(float) >=0.05]


Let's plot them up and see what we get.

In [13]:
plt.plot(site_ages.lat,site_ages.int_abs*1e6,'bo')

# put on the expected values for the present dipole moment (~80 ZAm^2)

lats=np.arange(-80,70,1)
vdms=80e21*np.ones(len(lats))
bs=pmag.vdm_b(vdms,lats)*1e6
plt.plot(lats,bs,'r-')
plt.xlabel('Latitude')
plt.ylabel('Intensity ($\mu$T)')

Out[13]:
Text(0,0.5,'Intensity ($\\mu$T)')

That is pretty awful agreement. Someday we need to figure out what is wrong with the data or our GAD hypothesis.

### grab_magic_key

Sometimes you want to read in a MagIC file and print out the desired key. Pandas makes this easy! In this example, we will print out latitudes for each site record.

In [14]:
sites=pd.read_csv('data_files/download_magic/sites.txt',sep='\t',header=1)
print (sites.lat)

0     42.60264
1     42.60264
2     42.60260
3     42.60352
4     42.60350
5     42.60104
6     42.60100
7     42.73656
8     42.73660
9     42.84180
10    42.84180
11    42.86570
12    42.86570
13    42.92031
14    42.92030
15    42.56857
16    42.49964
17    42.49962
18    42.49960
19    42.50001
20    42.50000
21    42.52872
22    42.52870
23    42.45559
24    42.45560
25    42.48923
26    42.48920
27    42.46186
28    42.46190
29    42.69156
30    42.65289
31    42.65290
32    43.30504
33    43.30500
34    43.36817
35    43.36817
36    43.36820
37    43.42133
38    43.42130
39    43.88590
40    43.88590
41    43.88590
42    43.84273
43    43.84270
44    43.53289
45    43.57494
46    43.57494
47    43.57490
48    44.15663
49    44.15660
50    44.18629
51    42.60260
Name: lat, dtype: float64


### magic_select

This example demonstrates how to select MagIC records that meet a certain criterion, like having a particular method code.

Note: to output into a MagIC formatted file, we can change the DataFrame to a list of dictionaries (with df.to_dict("records")) and use pmag.magic_write().

In [15]:
help(pmag.magic_write)

Help on function magic_write in module pmagpy.pmag:

magic_write(ofile, Recs, file_type)
Parameters
_________
ofile : path to output file
Recs : list of dictionaries in MagIC format
file_type : MagIC table type (e.g., specimens)

Return :
[True,False] : True if successful
ofile : same as input

Effects :
writes a MagIC formatted file from Recs


In [16]:
# read in the data file (input file name assumed for this example)
spec_df=pd.read_csv('data_files/magic_select/specimens.txt',sep='\t',header=1)
# pick out the desired data
method_key='method_codes' # change to magic_method_codes for data model 2.5
spec_df=spec_df[spec_df[method_key].str.contains('LP-DIR-AF')]
specs=spec_df.to_dict('records') # export to list of dictionaries
success,ofile=pmag.magic_write('data_files/magic_select/AF_specimens.txt',specs,'specimens') # 'specimens' is the data model 3.0 table type

76  records written to file  data_files/magic_select/AF_specimens.txt


### sites_extract

It is frequently desirable to format tables for publications from MagIC formatted files. This example is for the sites.txt formatted file. It will create a site information table with the location and age information, as well as directions and/or intensity summary tables. The function to call is ipmag.sites_extract().

In [17]:
help(ipmag.sites_extract)

Help on function sites_extract in module pmagpy.ipmag:

sites_extract(site_file='sites.txt', directions_file='directions.xls', intensity_file='intensity.xls', info_file='site_info.xls', output_dir_path='.', input_dir_path='', latex=False)
Extracts directional and/or intensity data from a MagIC 3.0 format sites.txt file.
Default output format is an Excel file.
Optional latex format longtable file which can be uploaded to Overleaf or
typeset with latex on your own computer.

Parameters
___________
site_file : str
input file name
directions_file : str
output file name for directional data
intensity_file : str
output file name for intensity data
info_file : str
output file name for site information (lat, lon, location, age....)
output_dir_path : str
path for output files
input_dir_path : str
path for input file if different from output_dir_path (default is same)
latex : boolean
if True, output file should be latex formatted table with a .tex ending

Return :
[True,False], error type : True if successful

Effects :
writes Excel or LaTeX formatted tables for use in publications



Here is an example of how to create LaTeX files:

In [18]:
#latex way:
ipmag.sites_extract(directions_file='directions.tex',intensity_file='intensities.tex',
output_dir_path='data_files/3_0/McMurdo',info_file='site_info.tex',latex=True)

Out[18]:
(True,
['/Users/nebula/Python/PmagPy/data_files/3_0/McMurdo/site_info.tex',
'/Users/nebula/Python/PmagPy/data_files/3_0/McMurdo/intensities.tex',
'/Users/nebula/Python/PmagPy/data_files/3_0/McMurdo/directions.tex'])

And here is how to create Excel files:

In [19]:
#xls way:
if has_xlwt:
    print(ipmag.sites_extract(output_dir_path='data_files/3_0/McMurdo'))

(True, ['/Users/nebula/Python/PmagPy/data_files/3_0/McMurdo/site_info.xls', '/Users/nebula/Python/PmagPy/data_files/3_0/McMurdo/intensity.xls', '/Users/nebula/Python/PmagPy/data_files/3_0/McMurdo/directions.xls'])


### criteria_extract

This example is for the criteria.txt formatted file. It will create a criteria table suitable for publication in either LaTeX or Excel format. The function to call is ipmag.criteria_extract().

In [20]:
help(ipmag.criteria_extract)

Help on function criteria_extract in module pmagpy.ipmag:

criteria_extract(crit_file='criteria.txt', output_file='criteria.xls', output_dir_path='.', input_dir_path='', latex=False)
Extracts criteria from a MagIC 3.0 format criteria.txt file.
Default output format is an Excel file.
Optional latex format file which can be uploaded to Overleaf or
typeset with latex on your own computer.

Parameters
___________
crit_file : str, default "criteria.txt"
input file name
output_file : str, default "criteria.xls"
output file name
output_dir_path : str, default "."
output file directory
input_dir_path : str, default ""
path for input file if different from output_dir_path (default is same)
latex : boolean, default False
if True, output file should be latex formatted table with a .tex ending

Return :
[True,False],  data table error type : True if successful

Effects :
writes xls or latex formatted tables for use in publications


In [21]:
# latex way:
ipmag.criteria_extract(output_dir_path='data_files/3_0/Megiddo',
latex=True,output_file='criteria.tex',)

Out[21]:
(True, ['/Users/nebula/Python/PmagPy/data_files/3_0/Megiddo/criteria.tex'])
In [22]:
#xls way:
if has_xlwt:
print(ipmag.criteria_extract(output_dir_path='data_files/3_0/Megiddo'))

(True, ['/Users/nebula/Python/PmagPy/data_files/3_0/Megiddo/criteria.xls'])


### specimens_extract

Similarly, it is useful to make tables for specimen (intensity) data to include in publications. Here are examples using a specimens.txt file.

In [23]:
help(ipmag.specimens_extract)

Help on function specimens_extract in module pmagpy.ipmag:

specimens_extract(spec_file='specimens.txt', output_file='specimens.xls', landscape=False, longtable=False, output_dir_path='.', input_dir_path='', latex=False)
Extracts specimen results from a MagIC 3.0 format specimens.txt file.
Default output format is an Excel file.
Optional latex format file which can be uploaded to Overleaf or
typeset with latex on your own computer.

Parameters
___________
spec_file : str, default "specimens.txt"
input file name
output_file : str, default "specimens.xls"
output file name
landscape : boolean, default False
if True output latex landscape table
longtable : boolean
if True output latex longtable
output_dir_path : str, default "."
output file directory
input_dir_path : str, default ""
path for input file if different from output_dir_path (default is same)
latex : boolean, default False
if True, output file should be latex formatted table with a .tex ending

Return :
[True,False],  data table error type : True if successful

Effects :
writes xls or latex formatted tables for use in publications


In [24]:
#latex way:
ipmag.specimens_extract(output_file='specimens.tex',landscape=True,
output_dir_path='data_files/3_0/Megiddo',latex=True,longtable=True)

Out[24]:
(True, ['/Users/nebula/Python/PmagPy/data_files/3_0/Megiddo/specimens.tex'])
In [25]:
#xls way:
if has_xlwt:
    print(ipmag.specimens_extract(output_dir_path='data_files/3_0/Megiddo'))

(True, ['/Users/nebula/Python/PmagPy/data_files/3_0/Megiddo/specimens.xls'])


## Contributions

Here are some useful functions for working with MagIC data model 3.0 contributions.

This program unpacks the .txt file downloaded from the MagIC database into individual text files. It has an option to also separate files for each location.

As an example, go to the MagIC database at http://earthref.org/MAGIC/doi/10.1029/2003gc000661 and download the contribution. Make a folder called MagIC_download and move the downloaded .txt file into it. Now use the program download_magic to unpack the .txt file (magic_contribution_16533.txt).
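The unpacking can be pictured as splitting the downloaded text on the table separator. A toy sketch, assuming (as in MagIC download files) that tables are separated by a line of '>>>>>>>>>>' and that each table starts with its own two-line header (the helper name and demo string are invented):

```python
def unpack_contribution(text):
    """Split a concatenated MagIC contribution into {table_name: table_text}.
    Assumes tables are separated by a '>>>>>>>>>>' line, with each table
    starting with its 'tab<TAB>table_name' header line."""
    tables = {}
    for chunk in text.split('>>>>>>>>>>'):
        chunk = chunk.strip()
        if not chunk:
            continue
        header = chunk.splitlines()[0]       # e.g. 'tab\tsites'
        name = header.split('\t')[1].strip()
        tables[name] = chunk + '\n'          # would be written to name + '.txt'
    return tables

demo = "tab\tcontribution\nid\n16533\n>>>>>>>>>>\ntab\tsites\nsite\tlat\nmc01\t-77.85\n"
print(sorted(unpack_contribution(demo)))  # → ['contribution', 'sites']
```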

In [26]:
help(ipmag.download_magic)

Help on function download_magic in module pmagpy.ipmag:

takes the name of a text file downloaded from the MagIC database and
unpacks it into MagIC-formatted files. by default, download_magic assumes
that you are doing everything in your current directory. if not, you may
provide optional arguments dir_path (where you want the results to go) and
input_dir_path (where the downloaded file is IF that location is different from
dir_path).

Parameters
----------
infile : str
MagIC-format file to unpack
dir_path : str
output directory (default ".")
input_dir_path : str, default ""
path for input file if different from output_dir_path (default is same)
overwrite: bool
overwrite current directory (default False)
print_progress: bool
verbose output (default True)
data_model : float
MagIC data model 2.5 or 3 (default 3)
separate_locs : bool
create a separate directory for each location (Location_*)
(default False)


In [27]:
ipmag.download_magic(infile='magic_contribution_16533.txt',
                     input_dir_path='data_files/download_magic',
                     dir_path='data_files/download_magic')

working on:  'contribution'
working on:  'locations'
working on:  'sites'
working on:  'samples'
working on:  'specimens'
working on:  'measurements'
working on:  'criteria'
working on:  'ages'

Out[27]:
True

You could look at these data with dmag_magic for example...

In [28]:
help(ipmag.upload_magic)

Help on function upload_magic in module pmagpy.ipmag:

upload_magic(concat=False, dir_path='.', dmodel=None, vocab='', contribution=None, input_dir_path='')
Finds all magic files in a given directory, and compiles them into an
upload file which can be validated and uploaded to the MagIC database.

Parameters
----------
concat : boolean where True means do concatenate to upload.txt file in dir_path,
False means write a new file (default is False)
dir_path : string for input/output directory (default ".")
dmodel : pmagpy data_model.DataModel object,
if not provided will be created (default None)
vocab : pmagpy controlled_vocabularies3.Vocabulary object,
if not provided will be created (default None)
contribution : pmagpy contribution_builder.Contribution object, if not provided will be created
in directory (default None)
input_dir_path : str, default ""
path for input files if different from output dir_path (default is same)

Returns
----------
tuple of either: (False, error_message, errors, all_failing_items)
if there was a problem creating/validating the upload file
or: (filename, '', None, None) if the file creation was fully successful.


In [29]:
ipmag.upload_magic(dir_path='data_files/download_magic',concat=True)

-I- Removing old error files from /Users/nebula/Python/PmagPy/data_files/download_magic: locations_errors.txt, samples_errors.txt, specimens_errors.txt, sites_errors.txt, ages_errors.txt, measurements_errors.txt, criteria_errors.txt, contribution_errors.txt, images_errors.txt
-W- Column 'core_depth' isn't in samples table, skipping it
-W- Column 'composite_depth' isn't in samples table, skipping it
-W- Invalid or missing column names, could not propagate columns
-I- ages file successfully read in
-I- Validating ages
-I- No row errors found!
-I- 20 records written to ages file
-I- contribution file successfully read in
-I- dropping these columns: version from the contribution table
-I- Validating contribution
-W- these rows have problems: row: 0, name: 0
-W- these columns contain bad values: data_model_version
-I- 1 records written to contribution file
-I- criteria file successfully read in
-I- Validating criteria
-I- No row errors found!
-I- 20 records written to criteria file
-I- locations file successfully read in
-I- Validating locations
-I- No row errors found!
-I- 3 records written to locations file
-I- measurements file successfully read in
-I- Validating measurements
-I- No row errors found!
-I- 3072 records written to measurements file
-I- samples file successfully read in
-I- Validating samples
-I- No row errors found!
-I- 271 records written to samples file
-I- sites file successfully read in
-I- Validating sites
-I- No row errors found!
-I- 52 records written to sites file
-I- specimens file successfully read in
-I- Validating specimens
-I- No row errors found!
-I- 225 records written to specimens file
-W- These tables have errors: contribution
-W- validation of upload file has failed.
-W- you will need to fix the above errors before your contribution can be activated.

Out[29]:
(False,
['contribution'],
{'contribution': {'rows':    num                   value_pass_data_model_version_cv  \
0    0  "3" is not in controlled vocabulary for magic_...

issues
0  {'value_pass_data_model_version_cv': '"3" is n...  ,
'missing_columns': [],
'missing_groups': Index([], dtype='object')}})

If this were your own study, you could now go to https://earthref.org/MagIC and upload your contribution to a Private Workspace, validate, assign a DOI and activate!

MagIC data model 3 took out redundant columns in the MagIC tables, so the hierarchy of specimens (in the measurements and specimens tables) up to samples, sites, and locations is lost. To put these back into the measurements table, we have the function cb.add_sites_to_meas_table(), which is super handy when data analysis requires it.

In [30]:
help(cb.add_sites_to_meas_table)

Help on function add_sites_to_meas_table in module pmagpy.contribution_builder:

add_sites_to_meas_table(dir_path)
Add site columns to measurements table (e.g., to plot intensity data),
or generate an informative error message.

Parameters
----------
dir_path : str
directory with data files

Returns
----------
status : bool
True if successful, else False
data : pandas DataFrame
measurement data with site/sample


In [31]:
status,meas_df=cb.add_sites_to_meas_table('data_files/3_0/McMurdo')
meas_df.columns

Out[31]:
Index(['experiment', 'specimen', 'measurement', 'dir_csd', 'dir_dec',
'dir_inc', 'hyst_charging_mode', 'hyst_loop', 'hyst_sweep_rate',
'treat_ac_field', 'treat_ac_field_dc_off', 'treat_ac_field_dc_on',
'treat_ac_field_decay_rate', 'treat_dc_field', 'treat_dc_field_ac_off',
'treat_dc_field_ac_on', 'treat_dc_field_decay_rate',
'treat_dc_field_phi', 'treat_dc_field_theta', 'treat_mw_energy',
'treat_mw_integral', 'treat_mw_power', 'treat_mw_time',
'treat_step_num', 'treat_temp', 'treat_temp_dc_off', 'treat_temp_dc_on',
'treat_temp_decay_rate', 'magn_mass', 'magn_moment', 'magn_volume',
'citations', 'instrument_codes', 'method_codes', 'quality', 'standard',
'meas_field_ac', 'meas_field_dc', 'meas_freq', 'meas_n_orient',
'meas_orient_phi', 'meas_orient_theta', 'meas_pos_x', 'meas_pos_y',
'meas_pos_z', 'meas_temp', 'meas_temp_change', 'analysts',
'description', 'software_packages', 'timestamp', 'magn_r2_det',
'magn_x_sigma', 'magn_xyz_sigma', 'magn_y_sigma', 'magn_z_sigma',
'susc_chi_mass', 'susc_chi_qdr_mass', 'susc_chi_qdr_volume',
'susc_chi_volume', 'sequence', 'sample', 'site'],
dtype='object')

### cb.get_intensity_col¶

The MagIC data model has several different forms of magnetization with different normalizations (moment, volume, or mass). So to find the one used in a particular measurements table we can use this handy function.

In [32]:
help(cb.get_intensity_col)

Help on function get_intensity_col in module pmagpy.contribution_builder:

get_intensity_col(data)
Check measurement dataframe for intensity columns 'magn_moment', 'magn_volume', 'magn_mass','magn_uncal'.
Return the first intensity column that is in the dataframe AND has data.

Parameters
----------
data : pandas DataFrame

Returns
---------
str
intensity method column or ""


In [33]:
magn_col=cb.get_intensity_col(meas_df)
print (magn_col)

magn_moment
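As a quick illustration of why this is handy, here is a minimal sketch that averages whatever intensity column was detected, site by site. It uses a tiny made-up DataFrame (hypothetical values, not the McMurdo data) so it runs on its own:

```python
import pandas as pd

# tiny made-up measurements table (hypothetical values, not the McMurdo data)
df = pd.DataFrame({'site': ['mc01', 'mc01', 'mc02'],
                   'magn_moment': [1.2e-5, 1.1e-5, 3.4e-6]})
magn_col = 'magn_moment'  # i.e., what cb.get_intensity_col(df) would return here
# average the detected intensity column by site
site_means = df.groupby('site')[magn_col].mean()
print(site_means)
```

Because the column name comes back as a string, the same downstream code works no matter which normalization (moment, volume, or mass) a contribution happens to use.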


# Conversion Scripts¶

## convert_2_magic¶

We imported this module as convert. It provides many functions for creating MagIC format files from non-MagIC formats. The MagIC formatted files can then be used with PmagPy programs and uploaded to the MagIC database. Let's take a look at the options:

### _2g_asc_magic¶

This conversion has not been written yet. If you have this file format and wish to convert it to the MagIC file format, please let us know.

### _2g_bin_magic¶

To convert the binary formatted 2G Enterprises measurement files, we can use the function convert._2g_bin() in the convert_2_magic module (imported as convert).

In [34]:
help(convert._2g_bin)

Help on function _2g_bin in module pmagpy.convert_2_magic:

_2g_bin(dir_path='.', mag_file='', meas_file='measurements.txt', spec_file='specimens.txt', samp_file='samples.txt', site_file='sites.txt', loc_file='locations.txt', or_con='3', specnum=0, samp_con='2', corr='1', gmeths='FS-FD:SO-POM', location='unknown', inst='', user='', noave=False, input_dir='', lat='', lon='')
Convert 2G binary format file to MagIC file(s)

Parameters
----------
dir_path : str
output directory, default "."
mag_file : str
input file name
meas_file : str
output measurement file name, default "measurements.txt"
spec_file : str
output specimen file name, default "specimens.txt"
samp_file: str
output sample file name, default "samples.txt"
site_file : str
output site file name, default "sites.txt"
loc_file : str
output location file name, default "locations.txt"
or_con : number
orientation convention, default '3', see info below
specnum : int
number of characters to designate a specimen, default 0
samp_con : str
sample/site naming convention, default '2', see info below
corr: str
default '1'
gmeths : str
sampling method codes, default "FS-FD:SO-POM", see info below
location : str
location name, default "unknown"
inst : str
instrument, default ""
user : str
user name, default ""
noave : bool
do not average duplicate measurements, default False (so by default, DO average)
input_dir : str
input file directory IF different from dir_path, default ""
lat : float
latitude, default ""
lon : float
longitude, default ""

Returns
---------
Tuple : (True or False indicating if conversion was successful, meas_file name written)

Info
----------
Orientation convention:
[1] Lab arrow azimuth= mag_azimuth; Lab arrow dip=-field_dip
i.e., field_dip is degrees from vertical down - the hade [default]
[2] Lab arrow azimuth = mag_azimuth-90; Lab arrow dip = -field_dip
i.e., mag_azimuth is strike and field_dip is hade
[3] Lab arrow azimuth = mag_azimuth; Lab arrow dip = 90-field_dip
i.e.,  lab arrow same as field arrow, but field_dip was a hade.
[4] lab azimuth and dip are same as mag_azimuth, field_dip
[5] lab azimuth is same as mag_azimuth,lab arrow dip=field_dip-90
[6] Lab arrow azimuth = mag_azimuth-90; Lab arrow dip = 90-field_dip
[7] all others you will have to either customize your
self or e-mail [email protected] for help.

Sample naming convention:
[1] XXXXY: where XXXX is an arbitrary length site designation and Y
is the single character sample designation.  e.g., TG001a is the
first sample from site TG001.    [default]
[2] XXXX-YY: YY sample from site XXXX (XXX, YY of arbitrary length)
[3] XXXX.YY: YY sample from site XXXX (XXX, YY of arbitrary length)
[4-Z] XXXX[YYY]:  YYY is sample designation with Z characters from site XXX
[5] site name = sample name
[6] site name entered in site_name column in the orient.txt format input file  -- NOT CURRENTLY SUPPORTED
[7-Z] [XXX]YYY:  XXX is site designation with Z characters from samples  XXXYYY

Sampling method codes:
FS-FD field sampling done with a drill
FS-H field sampling done with hand samples
FS-LOC-GPS  field location done with GPS
FS-LOC-MAP  field location done with map
SO-POM   a Pomeroy orientation device was used
SO-ASC   an ASC orientation device was used
SO-MAG   orientation with magnetic compass
SO-SUN   orientation with sun compass


In [35]:
# set the input directory
input_dir='data_files/convert_2_magic/2g_bin_magic/mn1/'
mag_file='mn001-1a.dat'
convert._2g_bin(mag_file=mag_file,input_dir=input_dir,dir_path=input_dir)

importing  mn001-1a
adding measurement column to measurements table!
-I- writing measurements records to /Users/nebula/Python/PmagPy/measurements.txt
-I- 19 records written to measurements file
-I- writing specimens records to /Users/nebula/Python/PmagPy/data_files/convert_2_magic/2g_bin_magic/mn1/specimens.txt
-I- 1 records written to specimens file
-I- writing samples records to /Users/nebula/Python/PmagPy/data_files/convert_2_magic/2g_bin_magic/mn1/samples.txt
-I- 1 records written to samples file
-I- writing sites records to /Users/nebula/Python/PmagPy/data_files/convert_2_magic/2g_bin_magic/mn1/sites.txt
-I- 1 records written to sites file
-I- writing locations records to /Users/nebula/Python/PmagPy/data_files/convert_2_magic/2g_bin_magic/mn1/locations.txt
-I- 1 records written to locations file
-I- writing measurements records to /Users/nebula/Python/PmagPy/data_files/convert_2_magic/2g_bin_magic/mn1/measurements.txt
-I- 19 records written to measurements file

Out[35]:
(True, 'measurements.txt')

These are measurement data for a single specimen, so we can take a quickie look at the data in an equal area projection.

In [36]:
help(ipmag.plot_di)

Help on function plot_di in module pmagpy.ipmag:

plot_di(dec=None, inc=None, di_block=None, color='k', marker='o', markersize=20, legend='no', label='', title='', edge='')
Plot declination, inclination data on an equal area plot.

Before this function is called a plot needs to be initialized with code that looks
something like:
>fignum = 1
>plt.figure(num=fignum,figsize=(10,10),dpi=160)
>ipmag.plot_net(fignum)

Required Parameters
-----------
dec : declination being plotted
inc : inclination being plotted

or

di_block: a nested list of [dec,inc,1.0]
(di_block can be provided instead of dec, inc in which case it will be used)

Optional Parameters (defaults are used if not specified)
-----------
color : the default color is black. Other colors can be chosen (e.g. 'r')
marker : the default marker is a circle ('o')
markersize : default size is 20
label : the default label is blank ('')
legend : the default is no legend ('no'). Putting 'yes' will plot a legend.
edge : marker edge color - if blank, is color of marker


In [37]:
meas_df=pd.read_csv(input_dir+'measurements.txt',sep='\t',header=1)
ipmag.plot_net(1)
ipmag.plot_di(dec=meas_df['dir_dec'],inc=meas_df['dir_inc'])


### agm_magic¶

This program converts Micromag hysteresis files into MagIC formatted files. Because this program creates files for uploading to the MagIC database, specimens should also have sample/site/location information, which can be provided on the command line. If this information is not available, for example for a synthetic specimen, set syn=True to use a synthetic name.

Someone named Lima Tango has measured a synthetic specimen named myspec for hysteresis and saved the data in a file named agm_magic_example.agm in the agm_magic/agm_directory folder. The backfield IRM curve for the same specimen was saved in the same directory as agm_magic_example.irm. Use the function convert.agm() to convert the data into a measurements.txt output file. For the backfield IRM file, set the keyword "bak" to True. These were measured using cgs units, so be sure to set the units keyword argument properly. Combine the two output files using the instructions for combine_magic. The agm files can be plotted using hysteresis_magic, but the back-field plots are broken.

In [38]:
help(convert.agm)

Help on function agm in module pmagpy.convert_2_magic:

agm(agm_file, dir_path='.', input_dir_path='', meas_outfile='', spec_outfile='', samp_outfile='', site_outfile='', loc_outfile='', spec_infile='', samp_infile='', site_infile='', specimen='', specnum=0, samp_con='1', location='unknown', instrument='', institution='', bak=False, syn=False, syntype='', units='cgs', fmt='new', user='')
Convert AGM format file to MagIC file(s)

Parameters
----------
agm_file : str
input file name
dir_path : str
working directory, default "."
input_dir_path : str
input file directory IF different from dir_path, default ""
meas_outfile : str
output measurement file name, default ""
(default output is SPECNAME.magic)
spec_outfile : str
output specimen file name, default ""
(default output is SPEC_specimens.txt)
samp_outfile: str
output sample file name, default ""
(default output is SPEC_samples.txt)
site_outfile : str
output site file name, default ""
(default output is SPEC_sites.txt)
loc_outfile : str
output location file name, default ""
(default output is SPEC_locations.txt)
samp_infile : str
existing sample infile (not required), default ""
site_infile : str
existing site infile (not required), default ""
specimen : str
specimen name, default ""
(default is to take base of input file name, e.g. SPEC.agm)
specnum : int
number of characters to designate a specimen, default 0
samp_con : str
sample/site naming convention, default '1', see info below
location : str
location name, default "unknown"
instrument : str
instrument name, default ""
institution : str
institution name, default ""
bak : bool
IRM backfield curve, default False
syn : bool
synthetic, default False
syntype : str
synthetic type, default ""
units : str
units, default "cgs"
fmt: str
input format, options: ('new', 'old', 'xy', default 'new')
user : user name

Returns
---------
Tuple : (True or False indicating if conversion was successful, meas_file name written)

Info
--------
Sample naming convention:
[1] XXXXY: where XXXX is an arbitrary length site designation and Y
is the single character sample designation.  e.g., TG001a is the
first sample from site TG001.    [default]
[2] XXXX-YY: YY sample from site XXXX (XXX, YY of arbitrary length)
[3] XXXX.YY: YY sample from site XXXX (XXX, YY of arbitrary length)
[4-Z] XXXX[YYY]:  YYY is sample designation with Z characters from site XXX
[5] site name = sample name
[6] site name entered in site_name column in the orient.txt format input file  -- NOT CURRENTLY SUPPORTED
[7-Z] [XXX]YYY:  XXX is site designation with Z characters from samples  XXXYYY


In [39]:
convert.agm('agm_magic_example.agm',dir_path='data_files/convert_2_magic/agm_magic/',
specimen='myspec',fmt='old',meas_outfile='agm.magic')

-I- writing specimens records to /Users/nebula/Python/PmagPy/data_files/convert_2_magic/agm_magic/myspec_specimens.txt
-I- 1 records written to specimens file
-I- writing samples records to /Users/nebula/Python/PmagPy/data_files/convert_2_magic/agm_magic/samples.txt
-I- 1 records written to samples file
-I- writing sites records to /Users/nebula/Python/PmagPy/data_files/convert_2_magic/agm_magic/sites.txt
-I- 1 records written to sites file
-I- writing locations records to /Users/nebula/Python/PmagPy/data_files/convert_2_magic/agm_magic/myspec_locations.txt
-I- 1 records written to locations file
-I- writing measurements records to /Users/nebula/Python/PmagPy/data_files/convert_2_magic/agm_magic/agm.magic
-I- 284 records written to measurements file

Out[39]:
(True, 'agm.magic')
In [40]:
convert.agm('agm_magic_example.irm',dir_path='data_files/convert_2_magic/agm_magic/',
specimen='myspec',fmt='old',meas_outfile='irm.magic')

-I- overwriting /Users/nebula/Python/PmagPy/data_files/convert_2_magic/agm_magic/myspec_specimens.txt
-I- 1 records written to specimens file
-I- overwriting /Users/nebula/Python/PmagPy/data_files/convert_2_magic/agm_magic/samples.txt
-I- 1 records written to samples file
-I- overwriting /Users/nebula/Python/PmagPy/data_files/convert_2_magic/agm_magic/sites.txt
-I- 1 records written to sites file
-I- overwriting /Users/nebula/Python/PmagPy/data_files/convert_2_magic/agm_magic/myspec_locations.txt
-I- 1 records written to locations file
-I- writing measurements records to /Users/nebula/Python/PmagPy/data_files/convert_2_magic/agm_magic/irm.magic
-I- 41 records written to measurements file

Out[40]:
(True, 'irm.magic')
In [41]:
infiles=['data_files/convert_2_magic/agm_magic/agm.magic','data_files/convert_2_magic/agm_magic/irm.magic']
ipmag.combine_magic(infiles,'data_files/convert_2_magic/agm_magic/measurements.txt')

-I- writing measurements records to /Users/nebula/Python/PmagPy/data_files/convert_2_magic/agm_magic/measurements.txt
-I- 325 records written to measurements file

Out[41]:
'/Users/nebula/Python/PmagPy/data_files/convert_2_magic/agm_magic/measurements.txt'

We can look at these data using hysteresis_magic:

In [42]:
# read in the measurements data combined above with combine_magic
meas_data=pd.read_csv('data_files/convert_2_magic/agm_magic/measurements.txt',sep='\t',header=1)
# pick out the hysteresis data using the method code for the hysteresis lab protocol
hyst_data=meas_data[meas_data.method_codes.str.contains('LP-HYS')]

# make a list of specimens
specimens=hyst_data.specimen.unique()
cnt=1
for specimen in specimens:
    # make the dictionary for figures that pmagplotlib likes
    HDD={'hyst':cnt,'deltaM':cnt+1,'DdeltaM':cnt+2}
    spec_data=hyst_data[hyst_data.specimen==specimen]
    # make a list of the field data
    B=spec_data.meas_field_dc.tolist()
    # make a list of the magnetization data
    M=spec_data.magn_moment.tolist()
    # call the plotting function
    hpars=pmagplotlib.plot_hdd(HDD,B,M,specimen)
    hpars['specimen']=specimen
    # print out the hysteresis parameters
    print (specimen,': \n',hpars)
    cnt+=3

myspec :
{'hysteresis_xhf': '1.77e-05', 'hysteresis_ms_moment': '2.914e+01', 'hysteresis_mr_moment': '5.493e+00', 'hysteresis_bc': '2.195e-02', 'hysteresis_bcr': '6.702e-02', 'magic_method_codes': 'LP-BCR-HDM', 'specimen': 'myspec'}


### bgc_magic¶

Here we convert the Berkeley Geochronology Center's AutoCore format to MagIC using convert.bgc().

In [43]:
help(convert.bgc)

Help on function bgc in module pmagpy.convert_2_magic:

bgc(mag_file, dir_path='.', input_dir_path='', meas_file='measurements.txt', spec_file='specimens.txt', samp_file='samples.txt', site_file='sites.txt', loc_file='locations.txt', append=False, location='unknown', site='', samp_con='1', specnum=0, meth_code='LP-NO', volume=12, user='', timezone='US/Pacific', noave=False)
Convert BGC format file to MagIC file(s)

Parameters
----------
mag_file : str
input file name
dir_path : str
working directory, default "."
input_dir_path : str
input file directory IF different from dir_path, default ""
meas_file : str
output measurement file name, default "measurements.txt"
spec_file : str
output specimen file name, default "specimens.txt"
samp_file: str
output sample file name, default "samples.txt"
site_file : str
output site file name, default "sites.txt"
loc_file : str
output location file name, default "locations.txt"
append : bool
append output files to existing files instead of overwrite, default False
location : str
location name, default "unknown"
site : str
site name, default ""
samp_con : str
sample/site naming convention, default '1', see info below
specnum : int
number of characters to designate a specimen, default 0
meth_code : str
orientation method codes, default "LP-NO"
e.g. [SO-MAG, SO-SUN, SO-SIGHT, ...]
volume : float
volume in ccs, default 12.
user : str
user name, default ""
timezone : str
timezone in pytz library format, default "US/Pacific"
list of timezones can be found at http://pytz.sourceforge.net/
noave : bool
do not average duplicate measurements, default False (so by default, DO average)

Returns
---------
Tuple : (True or False indicating if conversion was successful, meas_file name written)

Info
--------
Sample naming convention:
[1] XXXXY: where XXXX is an arbitrary length site designation and Y
is the single character sample designation.  e.g., TG001a is the
first sample from site TG001.    [default]
[2] XXXX-YY: YY sample from site XXXX (XXX, YY of arbitary length)
[3] XXXX.YY: YY sample from site XXXX (XXX, YY of arbitary length)
[4-Z] XXXX[YYY]:  YYY is sample designation with Z characters from site XXX
[5] site name same as sample
[6] site is entered under a separate column -- NOT CURRENTLY SUPPORTED
[7-Z] [XXXX]YYY:  XXXX is site designation with Z characters with sample name XXXXYYYY


In [44]:
dir_path='data_files/convert_2_magic/bgc_magic/'
convert.bgc('15HHA1-2A',dir_path=dir_path)

mag_file in bgc_magic /Users/nebula/Python/PmagPy/data_files/convert_2_magic/bgc_magic/15HHA1-2A
adding measurement column to measurements table!
-I- overwriting /Users/nebula/Python/PmagPy/measurements.txt
-I- 21 records written to measurements file
-I- writing specimens records to /Users/nebula/Python/PmagPy/data_files/convert_2_magic/bgc_magic/specimens.txt
-I- 1 records written to specimens file
-I- writing samples records to /Users/nebula/Python/PmagPy/data_files/convert_2_magic/bgc_magic/samples.txt
-I- 1 records written to samples file
-I- writing sites records to /Users/nebula/Python/PmagPy/data_files/convert_2_magic/bgc_magic/sites.txt
-I- 1 records written to sites file
-I- writing locations records to /Users/nebula/Python/PmagPy/data_files/convert_2_magic/bgc_magic/locations.txt
-I- 1 records written to locations file
-I- writing measurements records to /Users/nebula/Python/PmagPy/data_files/convert_2_magic/bgc_magic/measurements.txt
-I- 21 records written to measurements file

Out[44]:
(True,
'/Users/nebula/Python/PmagPy/data_files/convert_2_magic/bgc_magic/measurements.txt')

And let's take a look

In [45]:
meas_df=pd.read_csv(dir_path+'measurements.txt',sep='\t',header=1)
ipmag.plot_net(1)
ipmag.plot_di(dec=meas_df['dir_dec'],inc=meas_df['dir_inc'])


### cit_magic¶

To convert the CalTech format to MagIC, use convert.cit().

Craig Jones’ PaleoMag software package (http://cires.colorado.edu/people/jones.craig/PMag3.html) imports various file formats, including the ’CIT’ format developed for the Caltech lab and now used in the magnetometer control software that ships with 2G magnetometers that utilize a vertical sample changer system. The documentation for the CIT sample format is here: http://cires.colorado.edu/people/jones.craig/PMag_Formats.html#SAM_format. Demagnetization data for each specimen are in their own file in a directory with all the data for a site or study. These files are strictly formatted, with fields determined by the character position in the line. There must be a file with the suffix ‘.sam’ in the same directory as the specimen data files, which gives details about the specimens and a list of the specimen measurement files in the directory.

The first line in the .sam file is a comment (in this case the site name), the second is the latitude and longitude followed by a declination correction. In these data, the declination correction was applied to the specimen orientations so the value of the declination correction is set to be 0.

For a detailed description of the .sam and sample file formats, see the PaleoMag Formats website linked above.
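To make the header layout described above concrete, here is a minimal sketch that pulls the comment, latitude, longitude, and declination correction out of the first two lines of a made-up .sam header (the values are hypothetical, and since real files are fixed-width, a production parser should slice by character position rather than split on whitespace):

```python
# made-up .sam header lines (hypothetical values, not taken from a real file)
sam_header = ["Slate Islands site PI47",
              "  48.6  -87.0   0.0"]  # latitude, longitude, declination correction
comment = sam_header[0]
lat, lon, dec_corr = (float(x) for x in sam_header[1].split())
print(comment, lat, lon, dec_corr)
```

A declination correction of 0.0, as in the data discussed above, indicates the correction was already applied to the specimen orientations.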

In [46]:
help(convert.cit)

Help on function cit in module pmagpy.convert_2_magic:

cit(dir_path='.', input_dir_path='', magfile='', user='', meas_file='measurements.txt', spec_file='specimens.txt', samp_file='samples.txt', site_file='sites.txt', loc_file='locations.txt', locname='unknown', sitename='', methods=['SO-MAG'], specnum=0, samp_con='3', norm='cc', oersted=False, noave=False, meas_n_orient='8', labfield=0, phi=0, theta=0)
Converts CIT formatted magnetometer data into MagIC format for analysis and contribution to the MagIC database

Parameters
-----------
dir_path : directory to output files to (default : current directory)
input_dir_path : directory to input files (only needed if different from dir_path!)
magfile : magnetometer file (.sam) to convert to MagIC (required)
user : colon delimited list of analysts (default : "")
meas_file : measurement file name to output (default : measurements.txt)
spec_file : specimen file name to output (default : specimens.txt)
samp_file : sample file name to output (default : samples.txt)
site_file : site file name to output (default : site.txt)
loc_file : location file name to output (default : locations.txt)
locname : location name
sitename : site name set by user instead of using samp_con
methods : colon delimited list of sample method codes. full list here (https://www2.earthref.org/MagIC/method-codes) (default : SO-MAG)
specnum : number of terminal characters that identify a specimen
norm : is volume or mass normalization using cgs or si units (options : cc,m3,g,kg) (default : cc)
oersted : demag step values are in Oersted
noave : average measurement data or not. False is average, True is don't average. (default : False)
samp_con : sample naming convention options as follows:
[1] XXXXY: where XXXX is an arbitrary length site designation and Y
is the single character sample designation.  e.g., TG001a is the
first sample from site TG001.    [default]
[2] XXXX-YY: YY sample from site XXXX (XXX, YY of arbitrary length)
[3] XXXX.YY: YY sample from site XXXX (XXX, YY of arbitrary length)
[4-Z] XXXX[YYY]:  YYY is sample designation with Z characters from site XXX
[5] site name = sample name
[6] site name entered in sitename column in the orient.txt format input file  -- NOT CURRENTLY SUPPORTED
[7-Z] [XXX]YYY:  XXX is site designation with Z characters from samples  XXXYYY
meas_n_orient : Number of different orientations in measurement (default : 8)
labfield : DC_FIELD in microTesla (default : 0)
phi : DC_PHI in degrees (default : 0)
theta : DC_THETA in degrees (default : 0)

Returns
-----------
type - Tuple : (True or False indicating if conversion was successful, meas_file name written)



Use the function convert.cit() to convert the CIT data files from the Swanson-Hysell lab at Berkeley for the PI47 site in the data_files/convert_2_magic/cit_magic/PI47 directory. The site (PI47) was part of a data set published in Fairchild et al. (2016) (available in the MagIC database: https://earthref.org/MagIC/11292/). The location name was “Slate Islands”, the naming convention was #2, the specimen name is specified with 1 character, we don’t wish to average replicate measurements, and the samples were collected by drilling and oriented with a magnetic compass (”FS-FD" and "SO-MAG”).

In [47]:
dir_path='data_files/convert_2_magic/cit_magic/PI47/'
convert.cit(dir_path=dir_path,
magfile='PI47-.sam',locname="Slate Islands",specnum=1,samp_con='2',
methods=['FS-FD','SO-MAG'],noave=True)

PI47-

Warning: Specimen volume set to 1.0.
Warning: If volume/mass really is 1.0, set volume/mass to 1.001
Warning: specimen method code LP-NOMAG set.
Warning: Specimen volume set to 1.0.
Warning: If volume/mass really is 1.0, set volume/mass to 1.001
Warning: specimen method code LP-NOMAG set.
Warning: Specimen volume set to 1.0.
Warning: If volume/mass really is 1.0, set volume/mass to 1.001
Warning: specimen method code LP-NOMAG set.
Warning: Specimen volume set to 1.0.
Warning: If volume/mass really is 1.0, set volume/mass to 1.001
Warning: specimen method code LP-NOMAG set.
Warning: Specimen volume set to 1.0.
Warning: If volume/mass really is 1.0, set volume/mass to 1.001
Warning: specimen method code LP-NOMAG set.
Warning: Specimen volume set to 1.0.
Warning: If volume/mass really is 1.0, set volume/mass to 1.001
Warning: specimen method code LP-NOMAG set.
Warning: Specimen volume set to 1.0.
Warning: If volume/mass really is 1.0, set volume/mass to 1.001
Warning: specimen method code LP-NOMAG set.
Warning: Specimen volume set to 1.0.
Warning: If volume/mass really is 1.0, set volume/mass to 1.001
Warning: specimen method code LP-NOMAG set.
Warning: Specimen volume set to 1.0.
Warning: If volume/mass really is 1.0, set volume/mass to 1.001
Warning: specimen method code LP-NOMAG set.
adding measurement column to measurements table!
-I- overwriting /Users/nebula/Python/PmagPy/measurements.txt
-I- 266 records written to measurements file
-I- overwriting /Users/nebula/Python/PmagPy/data_files/convert_2_magic/cit_magic/PI47/specimens.txt
-I- 9 records written to specimens file
-I- overwriting /Users/nebula/Python/PmagPy/data_files/convert_2_magic/cit_magic/PI47/samples.txt
-I- 9 records written to samples file
-I- overwriting /Users/nebula/Python/PmagPy/data_files/convert_2_magic/cit_magic/PI47/sites.txt
-I- 1 records written to sites file
-I- overwriting /Users/nebula/Python/PmagPy/data_files/convert_2_magic/cit_magic/PI47/locations.txt
-I- 1 records written to locations file
-I- overwriting /Users/nebula/Python/PmagPy/data_files/convert_2_magic/cit_magic/PI47/measurements.txt
-I- 266 records written to measurements file

Out[47]:
(True, 'measurements.txt')

We can make some Zijderveld diagrams (see zeq_magic).

In [48]:
ipmag.zeq_magic(input_dir_path=dir_path, save_plots=False)

Out[48]:
(True, [])

Use the function convert.cit() to convert the CIT data files from the USGS lab at Menlo Park. The data file is in the data_files/convert_2_magic/cit_magic/USGS/bl9-1 directory, the file name is bl9-1.sam, and the analyst was Hagstrum. The location name was “Boring volcanic field”, and the site name was set by Hagstrum to BL9001 because the site name cannot be determined from the sample name with the currently available options. The samples were collected by drilling and oriented with a magnetic compass and sun compass (”FS-FD" and "SO-MAG”), the measurements are in Oersted instead of the standard milliTesla, and we don’t wish to average replicate measurements.

In [49]:
dir_path='data_files/convert_2_magic/cit_magic/USGS/bl9-1'
convert.cit(dir_path=dir_path,
magfile='bl9-1.sam',user='Hagstrum',locname="Boring volcanic field",
sitename='BL9001',methods=['FS-FD','SO-SM','LT-AF-Z'], oersted=True,
noave=True)

Boring Lava collection 2009

Warning: Specimen volume set to 1.0.
Warning: If volume/mass really is 1.0, set volume/mass to 1.001
Warning: specimen method code LP-NOMAG set.
Warning: Specimen volume set to 1.0.
Warning: If volume/mass really is 1.0, set volume/mass to 1.001
Warning: specimen method code LP-NOMAG set.
Warning: Specimen volume set to 1.0.
Warning: If volume/mass really is 1.0, set volume/mass to 1.001
Warning: specimen method code LP-NOMAG set.
Warning: Specimen volume set to 1.0.
Warning: If volume/mass really is 1.0, set volume/mass to 1.001
Warning: specimen method code LP-NOMAG set.
Warning: Specimen volume set to 1.0.
Warning: If volume/mass really is 1.0, set volume/mass to 1.001
Warning: specimen method code LP-NOMAG set.
Warning: Specimen volume set to 1.0.
Warning: If volume/mass really is 1.0, set volume/mass to 1.001
Warning: specimen method code LP-NOMAG set.
Warning: Specimen volume set to 1.0.
Warning: If volume/mass really is 1.0, set volume/mass to 1.001
Warning: specimen method code LP-NOMAG set.
Warning: Specimen volume set to 1.0.
Warning: If volume/mass really is 1.0, set volume/mass to 1.001
Warning: specimen method code LP-NOMAG set.
Warning: Specimen volume set to 1.0.
Warning: If volume/mass really is 1.0, set volume/mass to 1.001
Warning: specimen method code LP-NOMAG set.
adding measurement column to measurements table!
-I- overwriting /Users/nebula/Python/PmagPy/measurements.txt
-I- 63 records written to measurements file
-I- writing specimens records to /Users/nebula/Python/PmagPy/data_files/convert_2_magic/cit_magic/USGS/bl9-1/specimens.txt
-I- 9 records written to specimens file
-I- writing samples records to /Users/nebula/Python/PmagPy/data_files/convert_2_magic/cit_magic/USGS/bl9-1/samples.txt
-I- 9 records written to samples file
-I- writing sites records to /Users/nebula/Python/PmagPy/data_files/convert_2_magic/cit_magic/USGS/bl9-1/sites.txt
-I- 1 records written to sites file
-I- writing locations records to /Users/nebula/Python/PmagPy/data_files/convert_2_magic/cit_magic/USGS/bl9-1/locations.txt
-I- 1 records written to locations file
-I- writing measurements records to /Users/nebula/Python/PmagPy/data_files/convert_2_magic/cit_magic/USGS/bl9-1/measurements.txt
-I- 63 records written to measurements file

Out[49]:
(True, 'measurements.txt')

We can look at the Zijderveld and other diagrams with zeq_magic.

In [50]:
ipmag.zeq_magic(input_dir_path=dir_path, save_plots=False)

Out[50]:
(True, [])

Use the function convert.cit() to convert the CIT data files from Benjamin Weiss's lab at MIT. These data were part of a set published in EPSL: "A nonmagnetic differentiated early planetary body", doi:10.1016/j.epsl.2017.03.026. The data can be found in MagIC at https://earthref.org/MagIC/11943

The data file is in the data_files/convert_2_magic/cit_magic/MIT/7325B directory, the file name is 7325B.sam, and the analyst was Weiss. The location name was "NWA 7325", with the site name coming from the sample name with the "1" convention. The samples are described with the method codes DE-VM, LP-DIR-T, LT-AF-Z, LT-NO, LT-T-Z, and SO-CMD-NORTH (see https://www2.earthref.org/MagIC/method-codes for full descriptions). We also don't wish to average replicate measurements.

In [51]:
convert.cit(dir_path='data_files/convert_2_magic/cit_magic/MIT/7325B',
magfile='7325B.sam',user='Weiss',locname="NWA 7325",samp_con='1',
methods=['DE-VM', 'LP-DIR-T', 'LT-AF-Z', 'LT-NO', 'LT-T-Z', 'SO-CMD-NORTH'],
noave=True)

NWA 7325 sample B7

Warning: Specimen volume set to 1.0.
Warning: If volume/mass really is 1.0, set volume/mass to 1.001
Warning: specimen method code LP-NOMAG set.
Warning: Specimen volume set to 1.0.
Warning: If volume/mass really is 1.0, set volume/mass to 1.001
Warning: specimen method code LP-NOMAG set.
Warning: Specimen volume set to 1.0.
Warning: If volume/mass really is 1.0, set volume/mass to 1.001
Warning: specimen method code LP-NOMAG set.
Warning: Specimen volume set to 1.0.
Warning: If volume/mass really is 1.0, set volume/mass to 1.001
Warning: specimen method code LP-NOMAG set.
Warning: Specimen volume set to 1.0.
Warning: If volume/mass really is 1.0, set volume/mass to 1.001
Warning: specimen method code LP-NOMAG set.
Warning: Specimen volume set to 1.0.
Warning: If volume/mass really is 1.0, set volume/mass to 1.001
Warning: specimen method code LP-NOMAG set.
Warning: Specimen volume set to 1.0.
Warning: If volume/mass really is 1.0, set volume/mass to 1.001
Warning: specimen method code LP-NOMAG set.
Warning: Specimen volume set to 1.0.
Warning: If volume/mass really is 1.0, set volume/mass to 1.001
Warning: specimen method code LP-NOMAG set.
Warning: Specimen volume set to 1.0.
Warning: If volume/mass really is 1.0, set volume/mass to 1.001
Warning: specimen method code LP-NOMAG set.
adding measurement column to measurements table!
-I- overwriting /Users/nebula/Python/PmagPy/measurements.txt
-I- 309 records written to measurements file
-I- writing specimens records to /Users/nebula/Python/PmagPy/data_files/convert_2_magic/cit_magic/MIT/7325B/specimens.txt
-I- 9 records written to specimens file
-I- writing samples records to /Users/nebula/Python/PmagPy/data_files/convert_2_magic/cit_magic/MIT/7325B/samples.txt
-I- 9 records written to samples file
-I- writing sites records to /Users/nebula/Python/PmagPy/data_files/convert_2_magic/cit_magic/MIT/7325B/sites.txt
-I- 1 records written to sites file
-I- writing locations records to /Users/nebula/Python/PmagPy/data_files/convert_2_magic/cit_magic/MIT/7325B/locations.txt
-I- 1 records written to locations file
-I- writing measurements records to /Users/nebula/Python/PmagPy/data_files/convert_2_magic/cit_magic/MIT/7325B/measurements.txt
-I- 309 records written to measurements file

Out[51]:
(True, 'measurements.txt')

And take a look:

In [52]:
dir_path='data_files/convert_2_magic/cit_magic/MIT/7325B'
ipmag.zeq_magic(input_dir_path=dir_path, save_plots=False)

Out[52]:
(True, [])

### generic_magic¶

If you have a data file format that is not supported, you can relabel column headers to fit the generic format as in the generic_magic example data file.
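A quick way to relabel headers is with pandas. The original column names below are hypothetical placeholders, and the target names are illustrative; consult the generic_magic example data file for the exact headers your experiment type requires:

```python
import pandas as pd

# hypothetical in-house headers (keys) mapped to generic-format names (values);
# check the generic_magic example file for the headers your data actually needs
rename_map = {
    'sample_id': 'specimen',
    'step': 'treatment',
    'dec_deg': 'dir_dec',
    'inc_deg': 'dir_inc',
}
df = pd.DataFrame({'sample_id': ['mgf13a'], 'step': [0.0],
                   'dec_deg': [350.1], 'inc_deg': [62.4]})
df = df.rename(columns=rename_map)
tsv = df.to_csv(sep='\t', index=False)  # tab-delimited text, ready to save
print(tsv.splitlines()[0])
```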

To import the generic file format, use convert.generic().

In [53]:
help(convert.generic)

Help on function generic in module pmagpy.convert_2_magic:

generic(magfile='', dir_path='.', meas_file='measurements.txt', spec_file='specimens.txt', samp_file='samples.txt', site_file='sites.txt', loc_file='locations.txt', user='', labfield=0, labfield_phi=0, labfield_theta=0, experiment='', cooling_times_list=[], sample_nc=[1, 0], site_nc=[1, 0], location='unknown', lat='', lon='', noave=False, input_dir_path='')
Convert generic file to MagIC file(s)

Parameters
----------
magfile : str
input file name
dir_path : str
output directory, default "."
meas_file : str
output measurement file name, default "measurements.txt"
spec_file : str
output specimen file name, default "specimens.txt"
samp_file: str
output sample file name, default "samples.txt"
site_file : str
output site file name, default "sites.txt"
loc_file : str
output location file name, default "locations.txt"
user : str
user name, default ""
labfield : float
dc lab field (in micro tesla)
labfield_phi : float
declination 0-360
labfield_theta : float
inclination -90 - 90
experiment : str
experiment type, see info below
cooling_times_list : list
cooling times in [K/minutes] separated by commas,
in the same order as XXX.10, XXX.20 ... XXX.70
sample_nc : list
sample naming convention, default [1, 0], see info below
site_nc : list
site naming convention, default [1, 0], see info below
location : str
location name, default "unknown"
lat : float
latitude, default ""
lon : float
longitude, default ""
noave : bool
do not average duplicate measurements, default False (so by default, DO average)
input_dir_path : str
input file directory IF different from dir_path, default ""

Info
--------
Experiment type:
Demag:
AF and/or Thermal
PI:
paleointensity thermal experiment (ZI/IZ/IZZI)
ATRM n:
ATRM in n positions (n=6)

AARM n:
AARM in n positions
CR:
cooling rate experiment
The treatment coding of the measurement file should be: XXX.00, XXX.10, XXX.20 ... XXX.70 etc. (XXX.00 is optional)
where XXX is the temperature and .10, .20 ... are running numbers of the cooling rate steps.
XXX.00 is the optional zero-field baseline. XXX.70 is the alteration check.
If using this type, you must also provide cooling rates in [K/minutes] in cooling_times_list,
separated by commas, in the same order as XXX.10, XXX.20 ... XXX.70

No need to specify the cooling rate for the zero-field step,
but users must make sure that there are no duplicate measurements in the file.

NLT:
non-linear-TRM experiment

Specimen-sample naming convention:
X determines which kind of convention (initial characters, terminal characters, or delimiter)
Y determines how many characters to remove to go from specimen --> sample OR which delimiter to use
X=0 Y=n: specimen is distinguished from sample by n initial characters.
(example: generic(samp_nc=[0, 4], ...)
if n=4 and specimen = mgf13a then sample = mgf13)
X=1 Y=n: specimen is distinguished from sample by n terminal characters.
(example: generic(samp_nc=[1, 1], ...)
if n=1 and specimen = mgf13a then sample = mgf13)
X=2 Y=c: specimen is distinguished from sample by a delimiter.
(example: generic(samp_nc=[2, "-"], ...)
if c='-' and specimen = mgf13-a then sample = mgf13)
default: sample name is the same as specimen name

Sample-site naming convention:
X determines which kind of convention (initial characters, terminal characters, or delimiter)
Y determines how many characters to remove to go from sample --> site OR which delimiter to use
X=0 Y=n: sample is distinguished from site by n initial characters.
(example: generic(site_nc=[0, 3])
if n=3 and sample = mgf13 then site = mgf)
X=1 Y=n: sample is distinguished from site by n terminal characters.
(example: generic(site_nc=[1, 2])
if n=2 and sample = mgf13 then site = mgf)
X=2 Y=c: sample is distinguished from site by a delimiter.
(example: generic(site_nc=[2, "-"])
if c='-' and sample = 'mgf-13' then site = mgf)
default: site name is the same as sample name
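The terminal-character and delimiter conventions above can be paraphrased in a few lines of Python. This is a sketch of the rule as documented, not PmagPy's actual implementation:

```python
def parent_name(name, nc):
    """Derive the parent name (sample from specimen, or site from sample)
    using an [X, Y] naming convention: X=1 strips Y terminal characters,
    X=2 keeps everything before the delimiter Y."""
    x, y = nc
    if x == 1:              # strip the last Y characters
        return name[:-int(y)]
    if x == 2:              # keep the part before the delimiter
        return name.split(y)[0]
    return name             # default: parent name equals child name

print(parent_name('mgf13a', [1, 1]))    # specimen mgf13a -> sample mgf13
print(parent_name('mgf-13', [2, '-']))  # sample mgf-13 -> site mgf
```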


In [54]:
convert.generic(magfile='data_files/convert_2_magic/generic_magic/generic_magic_example.txt',
experiment='PI',dir_path='data_files/convert_2_magic/generic_magic')

adding measurement column to measurements table!
-I- overwriting /Users/nebula/Python/PmagPy/measurements.txt
-I- 23 records written to measurements file
-I- writing specimens records to /Users/nebula/Python/PmagPy/data_files/convert_2_magic/generic_magic/specimens.txt
-I- 2 records written to specimens file
-I- writing samples records to /Users/nebula/Python/PmagPy/data_files/convert_2_magic/generic_magic/samples.txt
-I- 2 records written to samples file
-I- writing sites records to /Users/nebula/Python/PmagPy/data_files/convert_2_magic/generic_magic/sites.txt
-I- 2 records written to sites file
-I- writing locations records to /Users/nebula/Python/PmagPy/data_files/convert_2_magic/generic_magic/locations.txt
-I- 1 records written to locations file
-I- writing measurements records to /Users/nebula/Python/PmagPy/data_files/convert_2_magic/generic_magic/measurements.txt
-I- 23 records written to measurements file

Out[54]:
(True, 'measurements.txt')
In [55]:
# let's take a look
dir_path='data_files/convert_2_magic/generic_magic/'
ipmag.zeq_magic(input_dir_path=dir_path, save_plots=False)

Out[55]:
(True, [])
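A quick sanity check after any conversion is to peek at the resulting table. MagIC text files begin with a single header line naming the table (e.g., "tab	measurements") followed by tab-delimited columns, so pandas can read them by skipping that first line. This is illustrated with a tiny in-memory snippet rather than the real file, and the columns shown are only a subset of a real measurements table:

```python
import io
import pandas as pd

# minimal illustration of the MagIC text layout: one header line naming the
# table, then a tab-delimited table (real files have many more columns)
magic_text = ("tab\tmeasurements\n"
              "specimen\ttreat_ac_field\tdir_dec\tdir_inc\n"
              "PI47-1a\t0.005\t350.1\t62.4\n")
meas = pd.read_csv(io.StringIO(magic_text), sep='\t', skiprows=1)
print(len(meas), 'record(s);', list(meas.columns))
```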