The FASTQ file format is commonly used to store deep sequencing reads data. It is similar to the FASTA format, but includes additional information. Each record is represented by four lines:
1. The title line, which begins with the @ character and is followed by a sequence identifier and an optional description (like a FASTA title line).
2. The sequence itself.
3. A separator line, which begins with the + character.
4. The quality line. Sequence quality is encoded with characters from:
!"#$%&'()*+,-./0123456789:;<=>?@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\]^_`abcdefghijklmnopqrstuvwxyz{|}~
where ! represents the lowest quality, equivalent to a score of 1, and ~ is the highest quality, with a score of 94.
The function given below translates the characters into their corresponding scores and returns a dictionary which you can use later. Make sure you understand how to work with this dictionary before proceeding. Also note the use of triple quotes ("""), which allow the string to contain both single and double quote characters.
def creates_scores_dict():
    scores_string = """!"#$%&'()*+,-./0123456789:;<=>?@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\]^_`abcdefghijklmnopqrstuvwxyz{|}~"""
    scores_dict = {}
    for s in range(len(scores_string)):
        scores_dict[scores_string[s]] = s + 1
    return scores_dict
scores_dict = creates_scores_dict()
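Since the quality characters are consecutive in ASCII, each score can also be computed directly from the character's code point, which is a handy way to sanity-check the dictionary (a small sketch, not part of the assignment; char_score is a hypothetical helper):

```python
# Quality characters run from '!' (ASCII 33) to '~' (ASCII 126).
# Under this exercise's 1-based convention: score = ord(char) - ord('!') + 1
def char_score(c):
    return ord(c) - ord('!') + 1

assert char_score('!') == 1    # lowest quality
assert char_score('~') == 94   # highest quality
assert char_score('@') == 32
```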
The file files_for_hw/lambda_reads.fq contains 10,000 reads from the sequencing of $\lambda$ phage. We would like to discard low quality reads, defined as reads with a mean score lower than some predefined threshold.
a) Write a function mean_score that receives a read quality string and returns the mean score (float) of the read.
For example, the quality string !!!!! is equivalent to the scores 1,1,1,1,1, and thus the mean is 1.0.
However, the string 49@5<*>=E is equivalent to the scores 20,25,32,21,28,10,30,29,37 and has a mean of about 25.78.
def mean_score(read_quality_string):
    scores_sum = 0
    for char in read_quality_string:
        scores_sum += scores_dict[char]
    return scores_sum / len(read_quality_string)

mean_score('49@5<*>=E')
assert mean_score('!!!!!') == 1.0
assert round(mean_score('49@5<*>=E'), 2) == 25.78
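The same computation can be written more compactly with sum() and a generator expression. An equivalent sketch (the score mapping is rebuilt inline with ord() so the snippet stands alone; make_scores_dict and mean_score2 are hypothetical names):

```python
def make_scores_dict():
    # Same mapping as creates_scores_dict above: '!' -> 1 ... '~' -> 94
    return {chr(c): c - ord('!') + 1 for c in range(ord('!'), ord('~') + 1)}

def mean_score2(read_quality_string, scores=make_scores_dict()):
    # sum() over a generator expression replaces the explicit accumulator loop
    return sum(scores[c] for c in read_quality_string) / len(read_quality_string)

assert mean_score2('!!!!!') == 1.0
assert round(mean_score2('49@5<*>=E'), 2) == 25.78
```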
b) Write a function parse_FASTQ that parses a FASTQ file. It receives a path to a FASTQ file on the local filesystem and returns a dictionary where keys are sequences and values are the mean scores of the sequences.
Use the function on the provided file files_for_hw/lambda_reads.fq.
It is recommended to use the readline() method of file objects (although other solutions are also possible).
def parse_FASTQ(file):
    seq_scores = {}
    with open(file, 'r') as f:
        line = f.readline()
        while line:
            if line.startswith('@'):  # a new record begins
                line = f.readline()   # read next line (sequence)
                seq = line.strip()    # get sequence
                f.readline()          # skip the '+' line
                line = f.readline()   # get to the quality line
                quality = line.strip()
                mean_quality_score = mean_score(quality)
                seq_scores[seq] = mean_quality_score
            line = f.readline()       # finished with record, read next
    return seq_scores
# parse lambda reads file
lambda_reads_file = "files_for_hw4/lambda_reads.fq"
lambda_seqs_dict = parse_FASTQ(lambda_reads_file)
print(parse_FASTQ('sample_fastq'))
{'TTGGCAGGCCAAGGCCGATGGATCA': 25.52, 'GTTGCTTCTGGCGTGGGTGGGGGGG': 24.4, 'CCCTTCTTGTCTTCAGCGTTTCTCC': 26.28}
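The readline()-based loop above can also be phrased as consuming each record as a chunk of four lines. A sketch that works on any iterable of lines and assumes well-formed records (parse_fastq_lines is a hypothetical helper, shown here on an in-memory file):

```python
import io

def parse_fastq_lines(lines):
    # Group lines into records of four: title, sequence, '+', quality.
    # Assumes well-formed input (a multiple of four lines).
    it = iter(lines)
    seq_scores = {}
    for title in it:
        seq = next(it).strip()
        next(it)                      # the '+' separator line
        quality = next(it).strip()
        # Same 1-based convention as above: score = ord(c) - ord('!') + 1
        scores = [ord(c) - ord('!') + 1 for c in quality]
        seq_scores[seq] = sum(scores) / len(scores)
    return seq_scores

sample = io.StringIO('@read1\nACGT\n+\n!!!!\n@read2\nGGCC\n+\n~~~~\n')
assert parse_fastq_lines(sample) == {'ACGT': 1.0, 'GGCC': 94.0}
```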
c) Write a function filter_reads that takes the output from section b and a score cutoff (integer/float), and writes the sequences with scores higher than the cutoff to an output file.
Each sequence is written on a separate line (no need to keep the FASTQ format). Try different cutoffs (5, 10, 20) on the $\lambda$ phage reads.
def filter_reads(seqs_dict, cutoff, out_file):
    with open(out_file, 'w') as fo:
        for seq in seqs_dict:
            if seqs_dict[seq] > cutoff:
                print(seq, file=fo)
# run on Lambda reads
lambda_filtered_file = "files_for_hw4/lambda_filtered_reads.txt"
filter_reads(lambda_seqs_dict,10,lambda_filtered_file)
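To compare the suggested cutoffs (5, 10, 20) without writing three separate files, one can simply count how many reads would pass each threshold. A sketch on a toy dictionary; on the real data, toy_dict would be replaced with lambda_seqs_dict:

```python
# Hypothetical mean scores standing in for lambda_seqs_dict
toy_dict = {'ACGT': 4.0, 'GGCC': 12.5, 'TTAA': 25.0}

for cutoff in (5, 10, 20):
    kept = [seq for seq, score in toy_dict.items() if score > cutoff]
    print(cutoff, '->', len(kept), 'reads kept')
```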
In this question, we will work on data from the Global Biodiversity Information Facility (GBIF). This service holds a large amount of data about occurrences of organisms around the world.
Our goal will be to get the coordinates of observations of endangered turtle species in the Southern hemisphere.
The CSV files under files_for_hw/gbif_files, named GBIF1.csv, GBIF2.csv etc., contain observation data for various turtle genera. Each record is an observation, and contains information such as the species name and the coordinates of the observation.
a) The first step in the analysis will be to concatenate all the CSV files into one CSV file to make processing easier.
Write a function that receives a list of file names and concatenates them (i.e., inserts all records from all files into one file). Remember to include the header line only once, at the beginning.
Use the function to create a unified csv file.
Note: The function has no return value!
def append_csvs(csvs_list, out_csv):
    with open(out_csv, 'w') as fo:
        # print first file to output file
        first_csv = csvs_list[0]
        with open(first_csv, 'r') as f:
            content = f.read()
            print(content.strip(), file=fo)
        # print all other files to output file, skipping headers line
        for file in csvs_list[1:]:
            with open(file, 'r') as f:
                f.readline()
                for line in f:
                    print(line.strip(), file=fo)
# create unified file
import glob
filenames = glob.glob("files_for_hw4/gbif_files/GBIF*.csv")
print(filenames)
output_filename = "files_for_hw4/gbif_files/concat_GBIF.csv"
append_csvs(filenames, output_filename)
['files_for_hw4/gbif_files\\GBIF1.csv', 'files_for_hw4/gbif_files\\GBIF10.csv', 'files_for_hw4/gbif_files\\GBIF11.csv', 'files_for_hw4/gbif_files\\GBIF12.csv', 'files_for_hw4/gbif_files\\GBIF13.csv', 'files_for_hw4/gbif_files\\GBIF14.csv', 'files_for_hw4/gbif_files\\GBIF15.csv', 'files_for_hw4/gbif_files\\GBIF16.csv', 'files_for_hw4/gbif_files\\GBIF17.csv', 'files_for_hw4/gbif_files\\GBIF18.csv', 'files_for_hw4/gbif_files\\GBIF19.csv', 'files_for_hw4/gbif_files\\GBIF2.csv', 'files_for_hw4/gbif_files\\GBIF20.csv', 'files_for_hw4/gbif_files\\GBIF21.csv', 'files_for_hw4/gbif_files\\GBIF22.csv', 'files_for_hw4/gbif_files\\GBIF3.csv', 'files_for_hw4/gbif_files\\GBIF4.csv', 'files_for_hw4/gbif_files\\GBIF5.csv', 'files_for_hw4/gbif_files\\GBIF6.csv', 'files_for_hw4/gbif_files\\GBIF7.csv', 'files_for_hw4/gbif_files\\GBIF8.csv', 'files_for_hw4/gbif_files\\GBIF9.csv']
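A quick self-contained check of the header handling, using temporary files (the function is repeated here so the snippet runs on its own; the file contents are made up for the test):

```python
import os
import tempfile

def append_csvs(csvs_list, out_csv):
    # Same logic as above: copy the first file whole,
    # then skip the header line of every following file.
    with open(out_csv, 'w') as fo:
        with open(csvs_list[0]) as f:
            print(f.read().strip(), file=fo)
        for name in csvs_list[1:]:
            with open(name) as f:
                f.readline()  # skip header
                for line in f:
                    print(line.strip(), file=fo)

tmp = tempfile.mkdtemp()
part1 = os.path.join(tmp, 'part1.csv')
part2 = os.path.join(tmp, 'part2.csv')
with open(part1, 'w') as f:
    f.write('h,x\na,1\n')
with open(part2, 'w') as f:
    f.write('h,x\nb,2\n')
out = os.path.join(tmp, 'all.csv')
append_csvs([part1, part2], out)
with open(out) as f:
    assert f.read().splitlines() == ['h,x', 'a,1', 'b,2']  # header appears once
```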
b) The file files_for_hw/gbif_files/endangered_turtles.txt contains a list of endangered turtle species from The IUCN Red List of Threatened Species.
Write a function get_species that reads the file and returns a list of the species it includes.
def get_species(filename):
    species_list = []
    with open(filename, 'r') as f:
        for line in f:
            species_list.append(line.strip())
    return species_list
turtles_filename = "files_for_hw4/gbif_files/endangered_turtles.txt"
endangered_turtles = get_species(turtles_filename)
print(endangered_turtles)
['Acanthochelys pallidipectoris', 'Actinemys marmorata', 'Amyda cartilaginea', 'Astrochelys radiata', 'Astrochelys yniphora', 'Batagur baska', 'Batagur borneoensis', 'Batagur dhongoka', 'Batagur kachuga', 'Batagur trivittata', 'Caretta caretta', 'Carettochelys insculpta', 'Centrochelys sulcata', 'Chelodina mccordi', 'Chelodina parkeri', 'Chelodina pritchardi', 'Chelonia mydas', 'Chelonoidis chilensis', 'Chelonoidis denticulata', 'Chelonoidis nigra', 'Chelydra rossignonii', 'Chitra chitra', 'Chitra indica', 'Clemmys guttata', 'Cuora amboinensis', 'Cuora aurocapitata', 'Cuora flavomarginata', 'Cuora galbinifrons', 'Cuora mccordi', 'Cuora mouhotii', 'Cuora pani', 'Cuora trifasciata', 'Cuora yunnanensis', 'Cuora zhoui', 'Dermatemys mawii', 'Dermochelys coriacea', 'Elseya bellii', 'Elseya branderhorsti', 'Elusor macrurus', 'Emydoidea blandingii', 'Eretmochelys imbricata', 'Erymnochelys madagascariensis', 'Geochelone gigantea', 'Geochelone platynota', 'Geoclemys hamiltonii', 'Geoemyda japonica', 'Geoemyda spengleri', 'Glyptemys insculpta', 'Glyptemys muhlenbergii', 'Gopherus agassizii', 'Gopherus flavomarginatus', 'Gopherus polyphemus', 'Graptemys barbouri', 'Graptemys caglei', 'Graptemys flavimaculata', 'Graptemys gibbonsi', 'Graptemys oculifera', 'Graptemys pearlensis', 'Hardella thurjii', 'Heosemys annandalii', 'Heosemys depressa', 'Heosemys grandis', 'Heosemys spinosa', 'Homopus solus', 'Hydromedusa maximiliani', 'Indotestudo elongata', 'Indotestudo forstenii', 'Indotestudo travancorica', 'Kinixys homeana', 'Kinosternon angustipons', 'Kinosternon dunni', 'Lepidochelys kempii', 'Lepidochelys olivacea', 'Leucocephalon yuwonoi', 'Macrochelys temminckii', 'Malacochersus tornieri', 'Malayemys subtrijuga', 'Manouria emys', 'Manouria impressa', 'Mauremys annamensis', 'Mauremys mutica', 'Mauremys nigricans', 'Mauremys reevesii', 'Mauremys sinensis', 'Melanochelys tricarinata', 'Mesoclemmys dahli', 'Mesoclemmys hogei', 'Mesoclemmys zuliae', 'Morenia ocellata', 'Morenia 
petersi', 'Nilssonia formosa', 'Nilssonia gangetica', 'Nilssonia hurum', 'Nilssonia leithii', 'Notochelys platynota', 'Orlitia borneensis', 'Palea steindachneri', 'Pangshura sylhetensis', 'Pelochelys bibroni', 'Pelochelys cantorii', 'Pelodiscus sinensis', 'Peltocephalus dumerilianus', 'Pelusios broadleyi', 'Platysternon megacephalum', 'Podocnemis erythrocephala', 'Podocnemis lewyana', 'Podocnemis sextuberculata', 'Podocnemis unifilis', 'Psammobates geometricus', 'Pseudemydura umbrina', 'Pseudemys alabamensis', 'Pyxis arachnoides', 'Pyxis planicauda', 'Rafetus euphraticus', 'Rafetus swinhoei', 'Rheodytes leukops', 'Sacalia bealei', 'Sacalia quadriocellata', 'Siebenrockiella crassicollis', 'Siebenrockiella leytensis', 'Sternotherus depressus', 'Terrapene carolina', 'Terrapene coahuila', 'Testudo graeca', 'Testudo horsfieldii', 'Testudo kleinmanni', 'Trachemys adiutrix', 'Trachemys decorata', 'Trachemys gaigeae', 'Trachemys ornata', 'Trachemys taylori', 'Trachemys terrapen', 'Trachemys yaquia', 'Vijayachelys silvatica']
c) We will now use the unified CSV file and the endangered species list to create a filtered CSV, containing only the records we are interested in: those of endangered species observed in the Southern hemisphere (i.e., at latitude < 0).
Write a function that receives the unified file name, the species list, a maximum latitude and an output file name, and writes to the output file the records that satisfy these conditions (no return value is needed).
import csv

def filter_records(filename, species, max_lat, out_csv_file):
    with open(out_csv_file, 'w', newline='') as fo:
        csv_writer = csv.writer(fo)
        # write headers
        csv_writer.writerow(['','name','scientificName','genus','decimalLongitude','decimalLatitude','country'])
        with open(filename, 'r') as f:
            csv_reader = csv.reader(f)
            next(csv_reader)  # skip headers
            for row in csv_reader:
                spec = row[1]
                latitude = row[5]
                if spec in species and latitude != 'NA' and float(latitude) < max_lat:
                    csv_writer.writerow(row)
filtered_file = "files_for_hw4/gbif_files/endangered.csv"
filter_records(output_filename, endangered_turtles, 0, filtered_file)
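Indexing rows by position (row[1], row[5]) breaks silently if the column order ever changes; csv.DictReader keys each row by the header names instead. A sketch with a tiny inline table whose columns follow the header written above (the observation values are made up):

```python
import csv
import io

# Two fabricated observations: one Southern-hemisphere, one Northern
data = io.StringIO(
    ',name,scientificName,genus,decimalLongitude,decimalLatitude,country\n'
    '0,Chelonia mydas,Chelonia mydas L.,Chelonia,151.2,-33.8,Australia\n'
    '1,Caretta caretta,Caretta caretta L.,Caretta,2.3,48.8,France\n'
)
species = ['Chelonia mydas', 'Caretta caretta']
kept = [row for row in csv.DictReader(data)
        if row['name'] in species
        and row['decimalLatitude'] != 'NA'
        and float(row['decimalLatitude']) < 0]
assert [r['name'] for r in kept] == ['Chelonia mydas']
```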
In this question, you don't have to write real code; just write the regular expression you'd use within the quotation marks.
a) Write a regex that will match strings containing any kind of number: positive/negative, integer/float, etc. For example, all of the following should be matched: 7, -3, 6.14, -0.00054.
import re
re.compile(r'-?\d+\.?(\d+)?')
b) Write a regex that will match strings that end with a number between 100 and 199, followed by a '.' or a '\' character.
re.compile(r'1\d{2}[\.\\]$')
c) Write a regex that will match whole strings of prices in dollars, such as '100$', '2.99$', '500.90$', but not '7.656$', '80.0001$' or 'price is: 56.80$'.
re.compile(r'^\d+\.?\d{2}\$$')
d) Write a regex that will match strings beginning with 3 to 8 uppercase letters, followed by at least 4 characters, which can be anything but '%' or '!', and end with 'XY' or 'QW'.
re.compile(r'^[A-Z]{3,8}[^%!]{4,}(XY|QW)$')
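The four patterns can be checked quickly against the examples given in the text (a sketch; the test strings not taken from the text are made up):

```python
import re

# a) any kind of number
num = re.compile(r'-?\d+\.?(\d+)?')
assert all(num.search(s) for s in ['7', '-3', '6.14', '-0.00054'])

# b) ends with 100-199 followed by '.' or '\'
tail = re.compile(r'1\d{2}[\.\\]$')
assert tail.search('code 142.') and tail.search('code 199\\')
assert not tail.search('code 200.')

# c) whole strings of dollar prices
price = re.compile(r'^\d+\.?\d{2}\$$')
assert price.match('100$') and price.match('2.99$') and price.match('500.90$')
assert not price.match('7.656$') and not price.match('price is: 56.80$')

# d) 3-8 uppercase letters, 4+ chars excluding '%'/'!', ends with XY or QW
word = re.compile(r'^[A-Z]{3,8}[^%!]{4,}(XY|QW)$')
assert word.match('ABC1234XY')
assert not word.match('ABC12%4XY')
```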
A full scientific name of a plant species consists of a genus name, a species name and an authority (usually an abbreviation of the name of the person who first described the species). For example, in Arabidopsis thaliana (L.) Heynh., Arabidopsis is the genus, thaliana is the species and (L.) Heynh. is the authority.
In addition, a name may (or may not) include an infraspecific rank. This is added after the species name, and consists of an epithet which is either subsp. (for subspecies) or var. (for variety), followed by the name of the infraspecific rank.
For example, in Fraxinus americana var. acuminata (Lam.) K.Koch, the genus is Fraxinus, the species is americana, the infraspecific rank is var. acuminata and the authority is (Lam.) K.Koch.
The file files_for_hw/plant_names.txt contains a list of plant names. The goal is to break these names into their components.
Write a program that reads the names from the file and writes a new CSV file with the following column names: Genus, Species, Infraspecific and Authority.
Each plant name in plant_names.txt should then be processed (use regular expressions) and its components inserted into the CSV file.
import csv
import re
# regex for processing plant names
name_regex = re.compile(r'^([A-Z]\w+)\s(\w+)\s((subsp\.|var\.)\s\w+\s)?(.+)$')
# files
list_file = 'files_for_hw4/plant_names.txt'
out_csv = 'files_for_hw4/parsed_plant_names.csv'
def process_plant_name(name_string):
    """
    Receives a plant name and breaks it into its components.
    Returns a list: [genus, species, infraspecific (empty string if absent), authority]
    """
    match_result = name_regex.search(name_string)
    if match_result:  # if a match was found
        genus = match_result.group(1)
        species = match_result.group(2)
        # group(3) is None when there is no infraspecific rank
        infra = match_result.group(3).strip() if match_result.group(3) else ''
        author = match_result.group(5)
        return [genus, species, infra, author]
    else:  # if no match
        return None
with open(list_file, 'r') as f:
    with open(out_csv, 'w', newline='') as fo:
        csv_writer = csv.writer(fo)
        # write csv headers
        csv_writer.writerow(['Genus','Species','Infraspecific','Authority'])
        # iterate on names and process
        for name in f:
            name = name.strip()
            name_parts = process_plant_name(name)
            if name_parts:  # if name was processed successfully
                csv_writer.writerow(name_parts)
            else:  # warn that something is wrong with the name
                print('Name', name, 'could not be processed.')
print('Parsing completed')
Parsing completed
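The processing function can be sanity-checked against the two examples from the text. A self-contained sketch, with the regex and a condensed version of the function repeated so it runs on its own:

```python
import re

name_regex = re.compile(r'^([A-Z]\w+)\s(\w+)\s((subsp\.|var\.)\s\w+\s)?(.+)$')

def process_plant_name(name_string):
    # Condensed version of the function above; returns None on no match.
    m = name_regex.search(name_string)
    if not m:
        return None
    infra = m.group(3).strip() if m.group(3) else ''  # group(3) is None without a rank
    return [m.group(1), m.group(2), infra, m.group(5)]

assert process_plant_name('Arabidopsis thaliana (L.) Heynh.') == \
    ['Arabidopsis', 'thaliana', '', '(L.) Heynh.']
assert process_plant_name('Fraxinus americana var. acuminata (Lam.) K.Koch') == \
    ['Fraxinus', 'americana', 'var. acuminata', '(Lam.) K.Koch']
```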