Swift for Programmers


Swift is Apple's friendlier language, compared to their main one, Objective-C. I think it's meant to have trendy new features from Python and C#.

Running it

Apple's Xcode runs Swift through a nice IDE. The "Run" button fools you -- it attempts to run the program on your USB-connected phone. Instead press the "play" triangle, which brings up a virtual iPhone running your program.

If you only have a laptop, Xcode with Swift runs far too slowly. You'll need to run Swift from the command line. Go to the Swift page and download the extra thing it tells you. Starting it up (%swift) takes 30 seconds, but then it runs fine. There's no good way to import a file from the console; you'll have to fake it with cut&paste.


General things to know: 1) Semi-colons are optional. 2) Parens around if and loop conditions are optional. Oddly, curly-braces around the bodies are not. 3) An underscore, all by itself, is used as a "don't care" variable (in various contexts).

The preferred format for mixing strings and variables is \(exp): "value is \(x) and sum is \(3+6)".
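For instance, this is runnable as-is (x is just an example variable):

```swift
// String interpolation: \() evaluates any expression inside a string.
let x = 4
let msg = "value is \(x) and sum is \(3+6)"
print(msg)  // value is 4 and sum is 9
```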

It's funny about spacing. Assignment operators must have the same spacing on either side (n=5 or n = 5, but not n= 5). There are also times where a missing space confuses it. For example, the ?: operator requires spaces (b?1:2 is an error, but b ? 1 : 2 is fine). That's because ! and ? have special meanings when they come immediately after a variable.


Declaring is var name : type, where the type is optional if we can guess it:

var a : Int
var a = 4 // inferred to be Int
var a2 : Int = 6 // ": Int" not needed, but it's fine
let w="cow" // constant, inferred to be String

All types start with caps: Int, Float, Double, String, Bool, Character.

Character literals use double-quotes: var ch : Character = "x". This does nothing useful. Chars and strings are still different and still don't mix: "x"+ch gives a "string + char" error.

Numeric literals don't have types. Their type is guessed from context. That's just nuts. 3/4 is 0, but 3/4+0.0 is 0.75. The parser looks ahead and assumes you meant 3.0 and 4.0.
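A few lines make the context-typing concrete; this is a sketch you can paste into the REPL:

```swift
// The same literals get different types depending on context:
let a = 3/4          // both literals default to Int, so a is 0
let b = 3/4 + 0.0    // the 0.0 makes the whole expression Double: 0.75
let c: Double = 3/4  // the declared type makes both literals Double: 0.75
print(a, b, c)  // 0 0.75 0.75
```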

Replacing var with let declares constants, which is strongly encouraged. For pointers, let is a const pointer to non-const data:

let MAX_AGE = 30 // constant int 
let c1 : Cat = Cat(7)
c1.age=12; // this is fine
c1 = c2; // error - let is const pointer


Functions, pt-1

The return type goes at the end. Parens are optional:

func max(n1:Int, n2:Int)->(Int) { if n1>n2 { return n1 }; return n2 }

func doStuff(n:Int, name:String)->() { ... }  // no return value

The keyword func is in front. Notice how parameters use name:type the same as declaring. The return type is at the back after a ->. Use () for void.

Function calls must include the parameter names (but there's a way around it):

doStuff(n:5, name:"earl") // need to add actual parm names in front
doStuff(5,"earl") // error, no names

Because of this, function prototypes in many manuals are written with the names: doStuff(n:name:) but not the types, which is just terrible.

You can't flip the order: doStuff(name:"ted",n:3) is an error. It's a tricky rule, since most other systems with named arguments let you do that.

Required names can be used to distinguish overloads:

// both of these are legal:
func cost(dollars:Int)->(Double) { return Double(dollars)*5.0 }
func cost(spaceBucks:Int)->(Double) { return Double(spaceBucks)*2.3 }

var c = cost(spaceBucks:6) // picks out the second one

There are two ways to modify call-with-name. You can make them optional by adding an underscore before each:

func doStuff(_ n:Int, _ name:String)->() {...} // underscore = no names
doStuff(5,"earl") // now this is legal

The second option allows you to specify a different name for use inside the function. Again, it goes in front:

func doStuff(mealsNeeded m:Int, buddyName name:String)->() {
  // inside we use m and name:
  if m==5 || name==   ...

// in the call we use the other names:
doStuff(mealsNeeded:5, buddyName:"earl")

The trick makes sense -- often a function has long descriptive parameter names for the user, which you copy into shorter ones to use them. But it hardly seems worth it.
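Here's the whole trick in one runnable sketch. The String return value is my addition so the result is visible; the article's version returns nothing:

```swift
// External names (mealsNeeded, buddyName) for callers,
// short internal names (m, name) for the body:
func doStuff(mealsNeeded m: Int, buddyName name: String) -> String {
    return "\(name) needs \(m) meals"
}

let msg = doStuff(mealsNeeded: 5, buddyName: "earl")
print(msg)  // earl needs 5 meals
```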

All parameters are automatically constants. If you try to change one, the error complains about changing a "let" -- remember that's the word for a constant. I gotta say, this is an OK rule. Functions tweaking their inputs cause more errors than they save time.

Call-by-reference uses inout before the type, and oddly, changes it to & in the call:

func add5(n : inout Int) { n+=5 }
var x=7; add5(n:&x)

In a function or loop, defer { ... } runs its body just before leaving the function (and after the return value is computed):

// prints ABC, then "doing some clean-up"
func abc()->Int {
  defer { print("doing some clean-up"); ... } // runs just after any exit

You can use a defer to make a while loop fake a for loop:

while i<10 {
  defer { i+=1 } // runs after body end, or after any break or continue
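A complete version of the fake for loop. The continue shows why defer beats putting i+=1 at the bottom of the body:

```swift
var i = 0
var evens = [Int]()
while i < 10 {
    defer { i += 1 }            // the "increment", runs even on continue
    if i % 2 == 1 { continue }  // skip odd numbers
    evens.append(i)
}
print(evens)  // [0, 2, 4, 6, 8]
```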

nullable types ("optional" types)

Swift's nullable types are a bit odd. First off, reference types start out as non-nullable. They're pointers which can never be nil. This sort of makes sense. Most reference variables never intend to be nil. They won't even move away from their initial object:

var c1 : Cat = Cat("fluffy")
c1 = nil  // ERROR, non-nullable
c1 = anotherCat  // just fine, c1 is still a pointer

An ending ? makes a type nullable. c2 is a regular Cat pointer, which can also be nil:

var c2 : Cat? = Cat("Tasha")

var c3 = c2  // c3 is implicitly a Cat?
c3 = Cat("Lou")  // totally fine
c2 = c3  // sharing Lou

The other odd rule is that you're required to use extra "warning" syntax (an !) to examine nullable objects. Not using one is an error. I think the idea is to warn "this could throw a null-ref exception":

c2!.age = 3
c2.age = 5  // ERROR -- need the ! for  nullable

c1.age = 4  // non-nullable Cat uses normal look-up

"Casting" is either a ! (which you assume won't throw an error) or nothing:

normalCat = nullableCat! // legal, hope no run-time error
nullableCat = normalCat  // completely safe

Value types can be made nullable. They won't turn into pointers -- just normal value types which can also be nil:

var n : Int? = 8
a = n! + 1 // <- the ! is required

var n2 : Int? = n  // copies 8, this is not a pointer move
n = 24  // n2 is still 8

Functions with a possible "I don't know" often return a nullable, for example array searches:

func smallest(A:[Int]) -> Int? { ... } // nil if no elements

// we must catch the result in a nullable:
var n1 : Int? = smallest(A:Nums)
var n2 = smallest(A:Nums) // n2 implicitly declared nullable

var n3 : Int = smallest(A:Nums) // ERROR. n3 can't hold possible nil

if n1 != nil { n3 = n1! }

For fun, here's a nullable array (N can be nil) with nullable values (N[0] can be nil):

var N : [Int?]? // <- nullable array of nullable ints
x = N![0]!+1 // <- dereference N, dereference [0]

Then here's a possibly nil Cat where ages can be unknown:

if c2!.age! > 5 ...

When to use the ! can seem confusing. The rule is: put it any place you could cause a null-reference error, and leave it out where you can't.

Having said all that, certain variables can turn off the !'s. In other words, you can ignore that last set of rules. When you declare, use Cat! instead of Cat?. It's still a nullable, but look-ups are back to normal. Officially this is only for variables that must start nil, but can never logically be that way again:

// we simply can't set the value here:
var cowButton : Button! = nil

// set-up code. Is never nil:
cowButton = getNewButton()

cowButton.text // legal, and presumably always safe

? and ! are the same type, and are interchangeable. getNewButton probably returns a Button?, which we assign to a Button!. It's fine.

If's and if shortcuts

If's have optional ()'s around the test, and require {}'s around the body, but are otherwise normal:

if n>=0 && n<=10 { print("\(n) is in range") }

Nil-fixing special ifs

The binary n1 ?? n2 operator is a shortcut for "n1, unless it's nil, then n2". It can be used as a cast from an Int? to an Int, since it always returns a value (the result is a non-nullable):

var nn : Int? // can be nil
var n : Int // cannot be nil

n = nn ?? -1 // n=nn, or -1 if nil

Notice how there's no required ! after nn. Using ?? implies you're looking it up. In fact nn! ?? -1 crashes when nn is nil, never getting to the ?? -1 part.

Obviously, the item after the ?? should be a non-nullable, with everything else a nullable, but the system doesn't enforce it. n ?? -1 is useless but legal (it will always be n). nn1 ?? nn2 ?? nn3 is legal too -- but the result is still a nullable, and it's nil if all three are.
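A quick chained-?? sketch:

```swift
let nn1: Int? = nil
let nn2: Int? = 7
let nn3: Int? = 9

// walks left to right, stopping at the first non-nil:
let first = nn1 ?? nn2 ?? nn3 ?? -1  // the -1 only fires if all three are nil
print(first)  // 7
```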

Swift also has the fashionable cleanly-abort-on-nil trick. Using ? for a field instead of ! turns crash-on-nil into answer-is-nil. Obviously, it returns a nullable. It works on anything nullable, not just reference types:

var age1 : Int? // nullable int
age1 = c1?.age // if c1 is nil, abort and age1 is nil
age1 = c1!.age // crash if c1 is nil

age1 = c1?.bestFriend?.age // if either is nil, no crash - age1 is nil

It can be used with the ?? trick. This gives -99 on any nil:

// a1 can be a non-nullable
a1 = c1?.bestFriend?.age ?? -99

The third silly nil-testing shortcut is a combined assign and nil-check. if let ASSIGN { ACTIONS } only runs on a non-nil assign. As you'd guess, the variable is a non-nullable with scope only in the body:

if let a = c1?.age { total += a }
// same as: if c1 != nil { total += c1!.age }

It allows multiple assigns, and only runs if all work:

if let a1=c1?.age, let a2=c2?.age { total += a1+a2 }

In a burst of silliness, a real if-test is allowed after a comma:

if let a = c1?.age, a>=0 { total += a } // add if age exists and is >=0

The fourth nil-tester attempts to declare and assign a variable, and quits the function or loop on a nil. The syntax is meant to imply "if this assignment works, keep going, else quit":

guard let ca = c1?.age else { return }
// if we reach this point, ca is a declared Int (non-nullable)

// same as:
if c1==nil { return }
let ca = c1!.age

The body is required to be a return or loop-break/continue. For extra fun, it also allows the same extra if-test as an if-let. The test is for when to continue:

guard let ca = c1?.age,  ca>=0 else { return }

// same as:
if c1==nil { return }
let ca=c1!.age
if ca<0 { return }

To sum up, Swift has 4 trivial shortcuts to avoid writing if c1==nil.
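For reference, here are all four on one made-up Cat class (age is left nullable so every shortcut has something to dodge):

```swift
class Cat {
    var age: Int?
    init(_ a: Int?) { age = a }
}

let c1: Cat? = Cat(9)
let c2: Cat? = nil

let a1 = c1?.age ?? -1              // 1: ?? supplies a fallback (9)
let a2 = c2?.age                    // 2: ?. gives nil instead of crashing
var total = 0
if let a = c1?.age { total += a }   // 3: if-let runs only on non-nil
func ageOrZero(_ c: Cat?) -> Int {
    guard let a = c?.age else { return 0 }  // 4: guard-let quits early
    return a
}
print(a1, a2 ?? -99, total, ageOrZero(c2))  // 9 -99 9 0
```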

Reference modifiers for garbage collection

Swift uses reference counting for garbage collection. When the last pointer to an object is moved away, the count goes to 0 and the object instantly deletes itself. As a review, this method has a problem with pointer-loops - if 2 Cats are each other's best friends, they keep each other from being deleted, even after you lose all other pointers to them. The fix for this is a weak pointer -- a real pointer in every way, except it won't keep an object from being deleted. The programmer needs to make sure any possible pointer-loops have at least one weak pointer.

In other words, Java programmers never have to worry about permanent garbage. Swift programmers do, but only in complicated set-ups where they should be smart enough for weak pointers. In exchange, you don't get garbage-collection stalls.

Swift has 2 types of weak pointers: unowned ones crash if you use them after the target is collected; weak ones are set to nil. Both can only be used inside classes and so on (which should make sense - there's no reason to want a declared variable to be weak).

Weak is simpler to use. Here next and prev are regular pointers, used the normal way, despite prev being weak:

class Node {
  var val : Int = 0
  var next : Node? = nil
  weak var prev : Node? = nil

We don't completely need this. With it, losing the pointer to a chain of nodes will garbage-collect them all. But without it we could manually nil the back-pointers (then the 1st node would be collected, starting a chain reaction).
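To see the collection happen, here's a sketch using deinit (Swift's destructor, not covered above) and a counter:

```swift
var deinitCount = 0

class Node {
    var next: Node? = nil
    weak var prev: Node? = nil   // weak: won't keep its target alive
    deinit { deinitCount += 1 }  // runs when the node is collected
}

var a: Node? = Node()
var b: Node? = Node()
a!.next = b   // strong pointer: a keeps b alive
b!.prev = a   // weak pointer: does NOT keep a alive

a = nil       // a's count hits 0; freeing it releases b's last extra ref
b = nil
print(deinitCount)  // 2 -- both were collected, no leak
```

If prev were a normal (strong) pointer, the two nodes would keep each other alive and deinitCount would stay at 0.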

unowned is funnier. The syntax is unowned var c : Cat. It must have a value (it's not a nullable), which means it must be assigned in the constructor. If you ever need to break a circular reference in something you want garbage-collected all at once, and the pointer can't logically be nil, unowned is great.


Tuples

A tuple type is types in parens: (Int, String). Tuple literals are anything in parens: var nn = (5,"cow").

Tuples aren't as flexible as in other languages. You can't convert them to arrays or slices, or to longer or shorter tuples. The one shortcut is tuple L-values in assignments:

var a=0, b=0, c=0
(a,b,c) = (4,9,12) // triple assign

Functions can easily return tuples. Put the tuple type as the return type (parens around the type of each slot). You can catch them with a tuple, or multi-assign with an underscore for don't-care:

// returns min, max and mean of an array:
func minMaxMean(_ A:[Double])->(Double, Double, Double) {
  return (m1, m2, sum/total) // create a tuple

// we only want min and mean:
var low, avg : Double
(low, _ , avg) = minMaxMean(Nums)
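Here's a filled-in body for minMaxMean (the article elides it). Like most min/max code, this sketch assumes a non-empty array:

```swift
func minMaxMean(_ A: [Double]) -> (Double, Double, Double) {
    var lo = A[0], hi = A[0], sum = 0.0   // crashes on [], by design
    for x in A {
        if x < lo { lo = x }
        if x > hi { hi = x }
        sum += x
    }
    return (lo, hi, sum / Double(A.count))  // create a tuple
}

let Nums = [4.0, 10.0, 1.0]
var low = 0.0, avg = 0.0
(low, _, avg) = minMaxMean(Nums)   // we only want min and mean
print(low, avg)  // 1.0 5.0
```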

Access values of a tuple using a dot and the zero-based index: d.0 and so on. Or, you can add field names when created:

var d = (4,"yaw",6.8) // d is a tuple
d.0; d.1; d.2 // 4 yaw 6.8
var d2 = (feet:2, hands:4) // optionally add names to fields
d2.feet // 2
var d3 : (feet:Int, hands:Int) // these names are now locked to d3
d3 = (2,4); d3.hands // 4

A very nice rule says tuples with different field names can't be assigned, unless you force it with an as. In other words, they act like the structs they are being used to fake:

var p1 = (feet:7, hands:8)
var d2 = (hat:0, coat:0)
d2=p1 // ERROR
d2=p1 as (Int,Int)
d2.hat // 7 - d2 retains its field names

Tuple fields are L-values. d2.hat=3 is legal and fine. And of course the types are fixed. (3,"rat",5.6) is locked as (Int,String,Double).


Classes

Like C#, classes are reference types, structs are value types. They work the usual way. Except: this is renamed self, and access levels are file-oriented -- a fileprivate field can be read from anywhere else in its file (plain private is limited to the enclosing declaration). The constructor is named init:

class Cat {
  var age : Int
  var name : String
  // 2 constructors, named init:
  init() { age=1; name="mittens" }
  init(_ a:Int, _ nm:String) { age=a; name=nm } 
  func isKitten()->Bool { return self.age<3 }

There's no new keyword. Construct and/or allocate with just var c1=Cat().


Properties

A quick aside: properties are silly, but C# people think they're cool, so Swift needs them. Then they 1-up C# with another version.

Swift calls normal class fields properties. C#-style get/sets are called computed properties. They work in the usual way:

class Angle { 
  var degs=0.0 
  var rads : Double { 
    get { return degs/10 } 
    set(r) { degs=r*10 } // choose any var-name

a1.degs=6; a1.rads // 0.6
a1.rads=4; a1.degs // 40
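Filling in the braces, the whole example runs like this (the /10 is the article's toy conversion factor, not real degree/radian math):

```swift
class Angle {
    var degs = 0.0                 // a stored property
    var rads: Double {             // a computed property
        get { return degs / 10 }
        set(r) { degs = r * 10 }
    }
}

let a1 = Angle()
a1.degs = 6
print(a1.rads)  // 0.6
a1.rads = 4
print(a1.degs)  // 40.0
```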

Similar to a setter is a property observer. They run whenever a normal field is changed: willSet runs just before an assign, didSet runs after. willSet has read access to the incoming value. A typical "can't be less than 0" rule, written with didSet:

class Cow {
  public var weight : Double = 0 { // start of observer for weight
    didSet { if weight<0 { weight=0 } } // runs after the =

You could almost use willSet(nNew) { if nNew<0 { nNew=0 } } except the incoming value is read-only.


Inheritance

All functions are virtual, but require a decorative override keyword. It's a required hint:

class Animal {
  public var age:Int = 1
  public func cost()->Int {return 5}

class Cow : Animal {
  // override is required:
  public override func cost()->Int {return age*2}

The dynamic downcast to a subtype is as?. The ? is to show it can return nil (it returns the ?-version of the type):

var aa : Animal = Cow()
var cc = aa as? Cow // cc is a nullable Cow?
if cc != nil { ... }

The base class is used with super. Ex: return super.cost()*2. Funny rule: super.init() must come last (actual rule: you have to init all of your variables before calling base init).

It has special interface-only types, which it calls protocols. As normal, they define only functions (and properties) which users must implement:

// straight-forward interface:
protocol heldItem {
  func handsRequired()->Int  

Inherit from these with commas. Below hotDog inherits from Food and satisfies the protocol heldItem:

class hotDog : Food, heldItem {
  // implement heldItem, Pretty simple:
  func handsRequired()->Int { return bunLen > 5 ? 2 : 1 }

As usual, interfaces are declarable types:

var hh : heldItem = hotDog()
var n = hh.handsRequired()
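Put together and runnable (Food's body and bunLen are my minimal guesses, since the article elides them):

```swift
protocol heldItem {
    func handsRequired() -> Int
}

class Food {
    var calories = 0   // stand-in body; the article never shows Food
}

class hotDog : Food, heldItem {
    var bunLen = 7
    func handsRequired() -> Int { return bunLen > 5 ? 2 : 1 }
}

let hh: heldItem = hotDog()   // a protocol works as a declared type
let n = hh.handsRequired()
print(n)  // 2
```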

Container classes

There is NO linked-list class.


Arrays

Instead of a crude built-in array, then a nicer array class, Swift's built-in array is a fully functional array class. For example arrays can use A.append(8), and x=A.popLast().

Type of an array is [Int]. Official version is Array<Int>. Array literals are written inside brackets: var A:[String] = ["cat","duck","goat"].

var A = [3,6,12] // guess the type is [Int]
var A2 : [Double] = [5,7,8,9]
var W : Array<String> = ["cat","dog","goat"] // declare using long type

Other fun ways to init are by casting from a range, or the constructor:

var A2 = Array(4...7) // [4,5,6,7] cast from range
var A3 = Array(repeating:4, count:3) // [4,4,4] A constructor

Arrays are value types. They copy on assignment. This is very strange. A=B; A[0]=5 will not change B[0]. They use copy-on-write, so it's fine to make a read-only temp, such as A=isFull ? B : C. You're not blowing out the CPU copying an array.


Dictionaries

Swift has the standard Map/Hash-table: B["red"]=5. Look-ups return nullables, which is very cute. Instead of checking ContainsKey, just do the look-up - nil means not found.

var CatCute = ["siamese":9, "persian":8, "calico":3] // a dictionary
CatCute["tabby"]=7 // add an item

var c1 = CatCute["persian"] // c1 is an Int?
if c1==nil { print("no persian") }

var c2=CatCute["liger"] ?? 0 // c2 is an Int where 0=not found

The type is [ keyType : valType]. The formal type is Dictionary<key, val>:

var Spellings : [Int : String] = [:] // [:] is empty dictionary 
var DistanceTo : Dictionary<String, Int>


Slices

This is standard Perl-style stuff: A[2...5] grabs those 4 items from A. It includes both indexes. Note there are 3 dots. Expressions for the ends are allowed: A[n*2...A.count-1]. You can also leave out the start, or use ..< to exclude the end:

var A = [10,11,12,13,14,15,16,17,18,19,20] // array of Int

var s2 = A[4...6] // {14,15,16} This is an ArraySlice<Int>

var s3 = A[...4] // from the start through index 4: [10,11,12,13,14]

var s4 = A[..<commaPos] // shortcut for [0...commaPos-1]

Slices are copies. You can't write to an array through a slice, and slices won't track changes to the array.

The weirdest thing: slices use indexes from where they came. They are not 0-based. A[2...5] has indexes from 2 to 5:

var s2 = A[2...5] // {12,13,14,15}
s2[0] // ERROR - out of range
s2[2]; s2[3]; s2[4] // 12, 13, 14

// double-checking: finding the index of 13 also gives 3, not 1:
s2.firstIndex(of:13) // 3

They have startIndex and endIndex (1 past the last index). s[s.startIndex+1...s.endIndex-2] is all but the first and last items.

Slices and tuples don't mix (tuples don't mix with anything). Slices aren't arrays, but you can cast them into arrays, using [Int](s). The array indexes will jump back to 0-based (they have to).
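You can check all of that in a few lines:

```swift
let A = [10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20]
let s2 = A[2...5]                   // ArraySlice [12, 13, 14, 15]

print(s2.startIndex, s2.endIndex)   // 2 6 -- the parent's indexes, not 0-based
print(s2[2], s2[5])                 // 12 15
print(s2.firstIndex(of: 13) ?? -1)  // 3, not 1

let B = [Int](s2)                   // casting back to a real array...
print(B[0])                         // 12 -- ...resets indexes to 0-based
```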


Loops

While loops work as normal (but parens around the condition are optional, and the body requires {}'s). There's also a "repeat { } while" loop.

while i<3 { print(i); i+=1 }

repeat { i*=2 } while i<100

There's no C-style for(;;) loop. There are the usual foreach-style shortcuts: give a range with dot-dot-dot, or a special range-making function, or a collection to walk through:

for a in 3...9 { print(a) } // 3...9 is a Range
for _ in 1...8 { print("x") } // underscore = don't care about index

for a in stride(from:2, to:20, by:2) // range-creating function

for a in B // B is any collection 
for a in ["cat","duck","owl"] // ditto

For a dictionary, each item is a tuple of the two types, with built-in names key and value. For fun, the type of a stride looks like: var ss : StrideTo<Int> = stride(from: 5, to:33, by:5).

Functions, part 2: anonymous functions, lambdas, and function pointers

Function-pointers are declared using the signature, in the obvious way:

var f1 : (Int, Int)->Int; // function-type variable
f1 = intMax; // assign a function
f1(8,12) // 12

Anonymous functions look like: { signature in body }:

f1 = { (n1:Int, n2:Int)->Int in return n1+n2 } // assign anonymous function
f1(6,2) // 8

// Of course we can make these bodies as long as we need:
f1 = { (x:Int, y:Int)->Int in
    let sum = x+y
    if sum<0 { return 0 }

Anonymous functions don't use call-with-parm-name like regular ones. That's why f1(6,2) is legal. Requiring names would make them nearly useless. Also note the {} is around the entire thing, but no special markers are around the body.

As you'd expect, there are lots of shortcuts. Function variables can omit the function signature if it can be guessed, can leave out the parameter types if they can be guessed, and can leave out the return for a single computed answer. Finally, you can use $0, $1 ... for parameter names and leave out the signature entirely (assuming we can guess all types):

var f2 = intMax // intMax tells us f2's type

f2 = { (x, y) in return x+y } // omit types in parms, since f2 tells us
f2 = { (x, y) in x+y } // also omit "return" keyword

var chooser : (Int,Int)->Int
chooser = { // omit signature, since chooser tells us:
  if $0>$1 { return $0 }
  return $1

// assume adder is declared, so types are known
adder = { $0+$1 } // yes, {$0+$1} defines an anon "add" function
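All the shortcut levels side by side, on the same add function:

```swift
let f1: (Int, Int) -> Int = { (n1: Int, n2: Int) -> Int in return n1 + n2 }
let f2: (Int, Int) -> Int = { (x, y) in return x + y }  // types inferred
let f3: (Int, Int) -> Int = { (x, y) in x + y }         // return dropped
let adder: (Int, Int) -> Int = { $0 + $1 }              // names dropped too

print(f1(6, 2), f2(6, 2), f3(6, 2), adder(6, 2))  // 8 8 8 8
```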

Then one more insane shortcut: if you're calling a function which takes a function as the last input, you can supply it anonymously after the parens:

// apply expects 1 integer and 1 function:
func apply(_ x:Int,  _ f:(Int,Int)->Int) {

apply( 5, { $0*$1 } ) // the normal way to call apply

apply(5) { $0*$1 } // alternate, same call

It goes further and sillier. If a function is your only input, callers can use that trick and omit the parens entirely:

func useDuck(_ f:(String)->String)->Int { ... }

useDuck { $0+"y" } // anon function is the input, no parens at all

First-class, Capture

Functions can return functions, and it looks pretty nice. Here's a standard one that adds 2 ints one-at-a-time, enabling you to "lock in" the first:

func Add1By1(_ n1:Int)->(Int)->Int {
  return { (n2:Int)->Int in return n1+n2 } // returns an anon function, using n1

var add6 = Add1By1(6)
var add10 : (Int)->Int = Add1By1(10) // same idea, but explicit signature

add6(4) // 10
add10(4) // 14

The value of n1 was "captured" in add6 and add10. That also works when a member function returns a function. We can capture the object:

class Frog { 
  var age : Int = 0 
  // returns an "increase this frog's age" function:
  // "captures" our real age var
  func addAge(_ n:Int)->()->Int { 
    return { () in self.age += n; return self.age } 
  }
}

var frog1 = Frog()

var f1seesAGhost = frog1.addAge(3) // f1seesAGhost is a ()->Int function

f1seesAGhost() // 3, and frog1 is now 3
var n = f1seesAGhost() // 6, and frog1 is now 6

Template<T> functions/classes

These use the strict "must establish operations are legal" rules. Basic template functions are only allowed to use =. Otherwise you can restrict to a subtype using doStuff<T : baseClass> (similar to how C# does it). This searches any list A for value n:

func contains<T:Equatable>(_ A:[T], _ n:T)->Bool { 
  for a in A { 
    if a==n { return true } 
  } 
  return false 
} 

In order to use == we were forced to say T must have the pre-defined Equatable protocol (which most types have). Ick.


Enums

Enums don't cast to Ints by default, but you can force them to:

enum waterStates { case ice, water, steam } // these have no values

// add int values:
enum waterState : Int { case ice, water=10, steam } // ice is 0, steam is 11

// use dot-rawValue (casts never work):
var n : Int = waterState.ice.rawValue

The most common surprise is being able to omit the type-name in the constants, like plain .ice, when we can guess it. For example: var ws : waterStates = .ice.

You're also allowed to use any type as the fixed value. This enum uses doubles:

enum waterState : Double {
  case ice = -99.9, // need the spaces here
       steam = 212.0
// waterState.steam.rawValue is 212.0


Enums as unions

This is where it gets really weird. A union is an old C trick to save space by re-using memory. Suppose some types of monkeys need 1 double, while others need a pair of ints. We could make them share those 8 bytes with something like (union {int a,b;} or {double d;}). The only advantage is saving space - we don't have to allocate for the vars we will never use. In Swift, a version of an enum does that. It's not really an enum at all - it's a tagged union which holds data.

This union enum can hold 1 int (case cat), or a pair of strings (case dog):

// each SS variable is an int, or 2 strings:
enum SS {
  case cat(Int)
  case dog(String,String) 

When you create it, you decide which case it is, and give the permanent values for the fields:

var s1 = SS.cat(6) // type "cat", so needs 1 int
var s2=SS.dog("arf","ruff") // type "dog", which needs 2 strings

You can only read them with various special syntaxes. This one checks whether s2 is a Dog, and assigns parts to temps w1 and w2:

// if SS-enum s2 is a dog-type, assign w1, w2 and run:
if case .dog(let w1, let w2) = s2 {
  print("dog. Values are "+w1+" and "+w2)

There's also a special way using switch and another with guard. Swift is a very silly language.
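The switch version mentioned above looks like this (describe is a made-up wrapper so there's something to call):

```swift
enum SS {
    case cat(Int)
    case dog(String, String)
}

func describe(_ s: SS) -> String {
    switch s {                         // switch must cover every case
    case .cat(let n):          return "cat holding \(n)"
    case .dog(let w1, let w2): return "dog saying \(w1) and \(w2)"
    }
}

print(describe(SS.cat(6)))             // cat holding 6
print(describe(SS.dog("arf", "ruff"))) // dog saying arf and ruff
```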