@@ -48,7 +48,7 @@ Another way we can approach this would be to use the for-loop.
 For loops are traditionally slower and clunkier (especially in Python).
 However, Julia can often optimize for-loops like this,
 which is one of the things that makes it so powerful.
-It has multiple processing units that can run the same task parallelly.
+It has multiple processing units that can run the same task in parallel.
 
 We can calculate the Hamming Distance by looping over the characters in one of the strings
 and checking if the corresponding character at the same index in the other string matches.
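The loop described above can be sketched in Julia roughly like this (a minimal illustration; the function name `calc_hamming` and the example sequences are my own placeholders, not necessarily the tutorial's):

```julia
# A sketch of the loop-based approach: walk both strings in lockstep
# and count the positions where the characters differ.
function calc_hamming(seq_a::AbstractString, seq_b::AbstractString)
    length(seq_a) == length(seq_b) || error("sequences must be the same length")
    dist = 0
    for (char_a, char_b) in zip(seq_a, seq_b)
        if char_a != char_b
            dist += 1
        end
    end
    return dist
end

calc_hamming("GAGCCTACTAACGGGAT", "CATCGTAATGACGGCCT")  # 7
```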
@@ -104,8 +104,8 @@ bio_hamming[1]
 ```
 
 ```julia
-# Double check that we got the same values from both ouputs
-@assert calcHamming(ex_seq_a, ex_seq_b) == bio_hamming[1]
+# Double check that we got the same values from both outputs
+@assert hamming(ex_seq_a, ex_seq_b) == bio_hamming[1]
 ```
110110
111111
@@ -142,7 +142,7 @@ bio_hamming[1]
  AlignmentAnchor(17, 17, '=')]
 ```
 
-### Distances.Jl method
+### Distances.jl method
 
 Another package that calculates the Hamming distance is the [Distances package](https://github.com/JuliaStats/Distances.jl).
 We can call its `hamming` function on our two test sequences:
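Calling the package might look roughly like this (a sketch that assumes Distances.jl is installed and that its array-based API wants the strings converted to character vectors with `collect`; the sequences are placeholders):

```julia
using Distances  # assumes the Distances.jl package is installed

# Convert each string to a Vector{Char} for the array-based API
seq_a = collect("GAGCCTACTAACGGGAT")
seq_b = collect("CATCGTAATGACGGCCT")

hamming(seq_a, seq_b)  # 7
```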
@@ -182,7 +182,7 @@ The BioAlignments method takes up a much larger amount of memory,
 and nearly three times as long to run.
 However, it also generates an `AlignmentAnchor` data structure each time the function is called,
 so this is not a fair comparison.
-The `Distances` package is the winner here,which makes sense,
+The `Distances` package is the winner here, which makes sense,
 as it uses a vectorized approach.
 
 
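The vectorized idea can be illustrated without any package (a hypothetical one-liner, not the tutorial's own code): broadcast `!=` across the two character vectors and sum the resulting `Bool`s.

```julia
# Broadcasting != yields a BitVector of mismatch flags; sum counts them.
vectorized_hamming(a::AbstractString, b::AbstractString) =
    sum(collect(a) .!= collect(b))

vectorized_hamming("GAGCCTACTAACGGGAT", "CATCGTAATGACGGCCT")  # 7
```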