
Friday, October 31, 2014

#AACfamily Friday: The end! (for now)

Thank you to the families who sent in pictures of their AAC users this week! (I didn't even manage to get a picture this week.) I have loved seeing, and sharing, photos of AAC users and their families/teachers/therapists. I am going to continue with the AAC Family posts throughout the year on the last Friday of each month---so feel free to email them my way (uncommonfeedback@gmail.com) at any time.

Without further ado . . .


 This is Charlie from Nottingham, UK, showing off his skills using his new Talker with the PODD app . . . 

even on horseback!

 This is Lily Grace, age 5, checking out the sea otters at the aquarium with her papa. Lily Grace uses a PODD book.

This photo is a selfie of Alyssa Hillary (22), of Yes, That Too, with a laptop showing the desktop and a text-to-speech app. The speakers aren't really showing up (they're off to the side), but the set-up is a Windows 8 machine with eSpeak and Logitech speakers, for part-time AAC use by a graduate student and TA (that'd be Alyssa, taker of the picture and person in the picture).

Joshua, 5 years old, using Speak for Yourself.

Hosea (4) using Speak for Yourself at a pumpkin patch in Florida.

Mirabel, age 3, getting a treat after her audiology appointment. Two modes of communication . . . saying "donut" on the talker followed by signing "please" . . . one very clear request! 
(Which was promptly rewarded!)

Lemmy in Virginia using SFY on his iPad! He's just starting out and still a bit excited (as you can see from that blurry hand haha). He's exploring babble with his mom.
Lemmy has CVI, so his buttons are rainbow colored with high contrast black and white icons.

Felix showing off his talker with its new red cover!

Less than a year ago, Harry (now 3) used a 20 location PODD book to announce his mom's pregnancy on Facebook . . . 

and now he's using the 60 location PODD pageset on an iPad (on the Compass app) to chat while his newborn sister sleeps! (Congratulations on becoming a big brother, Harry!)


    

Thursday, October 30, 2014

False Negatives: Evaluations of Functionally "Nonverbal" Children

My daughter Maya, like many children with special needs, has undergone (too) many evaluations. Her skills and knowledge have been quantified, over and over again, with varying degrees of accuracy. Many parents of children with special needs speak about receiving the evaluation reports with a hint of PTSD in their voices and tears in their eyes. I see their posts in Facebook groups, about a son who has “the cognitive functioning of an 18 month old” or a daughter with “very little receptive language, according to the recent report.” Many have been devastated by these reports.

I am not one of these parents. I have not been devastated by the reports. 

Ok, the first one briefly knocked the wind out of me, but I recovered quickly.

Why not? Perhaps because Maya’s scores have been promising? Or “average”? Or “low average”?

No. No, not because of that. As a matter of fact, her first round of cognitive testing was abysmal. She scored in the bottom 0.4th percentile---meaning that 99.6% of all children her age were more cognitively advanced than Maya. Her receptive language evaluation (at 2.5 years old) estimated that she could produce 1-2 words and that she understood 1-2 words.

They said that she understood 1-2 words. At 2.5 years old.

I wasn’t gutted by the reports because I knew they were worthless. At the time, I thought everyone knew they were worthless---that they were just a means to an end (the end goal being to score “disabled enough” to qualify for services). I thought the results were like She’s in the 0.4th percentile---wink, wink. It wasn’t until I became more immersed in the special needs community, particularly the community of parents who have children who are functionally nonverbal, that I realized something startling---people are believing these numbers.

Parents, you cannot believe these numbers.

Not only will believing the numbers send you down some sort of spiral-of-terrible-feelings, but believing them will change your expectations for your child. The numbers will change what you believe your child is capable of, they will plant seeds of poisonous doubt, and they will corrode your ability to presume competence. If you have a child who doesn’t speak, one of your biggest, constant jobs in life will be advocating for people to believe in your child . . . so if you start to lower your expectations, others will follow.

Plus, really, the numbers are garbage.

My undergrad degree was in science (zoology), my first master’s degree was in teaching secondary science, and I was a classroom teacher for 8 years. This means both that I have spent a fair amount of time reading scientific journals and that I have experience with assessments of children (and their limitations). I listen to news stories with a skeptic’s ear (who funded that study, what was the sample size, how did they control for these 12 other variables, etc). I see the limitations of a standardized test as quickly as I see the potential data collected. I take all reports with a grain of salt, by nature. But when I saw the way that Maya’s evaluations were done, I realized that it’s not just a grain of salt that’s needed when looking at cognitive assessments (or receptive language assessments) of nonverbal kids, it’s a mountain.

Let’s think about the tests: If an evaluation is being done on a child who can’t functionally speak, can’t read, and can’t write, answers to questions are basically limited to two methods of production: pointing to an image in a field (which item would you use to drink?) or completing performance tasks (such as arranging tiles to match a pattern). There are significant variables in both of these models that can result in scores that are erroneously low---false negatives.

pattern sheets that would be used with tiles

Performance Tasks: It’s easy to see why the performance tasks would result in low scores for many of our kids. The only examples that I clearly remember were about using tiles/blocks to duplicate or continue patterns. For a child like Maya, who struggles to use her fingers to manipulate anything small, this was a very challenging task. She also has apraxia, which means that the messages from her brain to her muscles get derailed---so she could be thinking "move the tile forward" but her hand wouldn't respond. When we got to these questions, Maya refused to try, and put her head down.

So, did her inability to engage with the task indicate that:  
            a) she didn’t recognize the pattern
            b) she didn’t understand the question
            c) she didn’t want to try a task that would be simultaneously cognitively taxing and physically close-to-impossible

Who knows. But for scoring purposes, that’s a fail. This is a false negative. The absence of a positive response in this situation doesn’t reliably indicate a cognitive limitation, but it will be scored as one.

Pointing at pictures: When I first saw a flipbook that would be used to evaluate receptive language, I thought it looked like a great idea. Maya was probably around 2 years old when I saw the test administered, and while I sat silently and didn’t interfere, I was surprised by how something that at first glance seems fairly straightforward and objective was actually very subjective and unreliable. There are a few things that need to be taken into consideration when thinking about these tests.

1. Personality: Children with complex communication needs often have a fairly passive personality with regards to getting their point across (due in large part to the fact that other people---parents, siblings, teachers---often step in to try to communicate for them). In addition to being passive, our kids learn that sometimes acting clueless is a great way to avoid the task at hand (hours of therapy reinforced this for Maya---act clueless, and the therapist will adjust the task or change to a new one). While I can’t speak for all nonspeaking kids, Maya mastered “the blank stare” (in which she would simply stare around like she didn’t hear anyone and had no knowledge of what was transpiring around her) at a young age. I will never forget watching a new therapist cheerfully try to get her car keys back from Maya: Maya! Maya, over here! Those keys are shiny, right?! I’m going to need the keys back now! Hey, Maya! and then I was all Maya, quit it and give her the keys and she looked up and handed them over. If a child stares blankly at an evaluator and doesn't engage with the test, that is not necessarily indicative of a lack of understanding. It's a false negative.

2. Modified abilities lead to modified experiences: One page of the receptive language test had images of art-related items (crayons, scissors, glue, etc) and Maya was asked to identify the scissors. Maya didn’t know what scissors were when she was two---she was an only child who spent all of her time at home with me and in home therapy, and she had basically no fine motor ability. She had never seen scissors. This is only one example, but there were several. Did she know what scissors were? No. Was this indicative of her ability to understand, absorb, and recognize language? Of course not. It was indicative of the fact that her atypical abilities were leading her through an atypical childhood. It was a false negative. (I bet typical kids don’t know what chewy tubes or z-vibes are---Maya would have nailed those ones.)

3. A will, but not a way: For kids with disorders like apraxia or other neurological conditions, there are times when their body does not follow the directions being sent out by their brain. They may be thinking "reach out and touch the scissors, reach out and touch the scissors" but then see their arm reach forward and their hand make contact with the glue. If I stand up and quickly spin in place several times, no amount of me thinking "now run in a straight line!" is going to make that actually happen. It's a physical limitation. Modifications can be made to tests and testing environments to attempt to minimize these effects, but neurological motor planning troubles must be taken into account as a possible source of false negatives.

4. Communication vs. Testing: Children who are functionally nonverbal are often very interesting communicators who use a variety of methods to get their points across: gestures, signs, sounds, meaningful glances, avoidance. One thing that tends to be fairly consistent is the ability to communicate via pointing and pictures: even before Maya used picture cards or communication technology, she could communicate by pointing to the pictures in a book. When she pointed to the cow, I would say “A cow! Mooooo!” When she pointed to the moon, I would say “That’s the moon! It comes out at night.” Or, outside of books, if she pointed to the refrigerator, I would say “Are you hungry?”

During these tests our children are presented with a field of images and asked a question that has an “obvious answer”. The problem is that presenting a nonspeaking person with a field of images can be akin to saying “Check these out! Which one speaks to you? Which one reminds you of something? Which one do you want to talk about?”

 I'm willing to bet that if I showed this to Maya and said "Which picture shows the car in front of the house?" she would point to the first picture, look pointedly at me, and laugh. Translation: "Mom! There's a car driving into a house! This is ridiculous! . . . Wait, did you ask me a question?"

Example: Maya was 2(ish) and was in the middle of a receptive language test. The doctor flipped the page and said “Which one is the hairbrush?”, and Maya pointed to the toothbrush instead of the hairbrush. An obvious wrong answer, an obvious confusion of one "brush" with another, right? No, not right. We had been talking about tooth brushing at home all morning. We had read a book about going to the dentist. We had bought a new toothbrush the day before. When the doctor flipped the page and asked about a hairbrush, Maya was already fixated on the toothbrush (which looked just like hers, by the way) and reaching toward it. As she tapped it, she turned to make eye contact with me. I saw her saying Look! A toothbrush, just like mine! We were just talking about brushing teeth! The doctor saw the wrong answer. A false negative.

Parents, take heart. Professionals, take heed. There is no reliable standardized way to assess the receptive language or cognitive function of a person with complex communication needs. Even now, with a robust language system, Maya has a way of seemingly jumping to unrelated topics that are actually related (but their relation is something that only she would know). Here’s a final example: On Tuesday Maya went to her after-school speech therapy, and this happened:

Therapist: “How was school?”
Maya: (glancing at the fire alarm over his head, in the corner of the room, and then using her device to spell) “f-i-r-e.” (copied from the side of the alarm box)
Therapist: (looking up at the alarm) “Yes, that says ‘fire’ . . . but let’s focus---I asked you how your day was. Did you have a good day at school?”

There was no way for him to know that her school had a fire drill that morning.

He thought she wasn’t paying attention, but she actually was telling him about her day.


False negative.


Wednesday, October 29, 2014

Throwback Thursday (on Weds): Communication Before Speech


It's the final Throwback Thursday post of AAC Awareness month (but happening on a Wednesday so that today I can write something new for tomorrow). For the grand finale of TBT, here is my master AAC post: Communication Before Speech.

one of my favorite cartoons, by artist Mohamed Ghonemi


This post is a compilation of my best AAC posts, filled with external links, and sprinkled with counterarguments to the anti-AAC comments that I see time and time again (Just use sign language! Have you tried supplementing with fish oil?).

Happy reading:

Communication Before Speech

Tuesday, October 28, 2014

Take-a-Look Tuesday: 2 videos about AAC use by AAC users

Take-a-Look Tuesday brings you two AAC videos made by AAC users---one is informative, and one is funny but also informative, especially if you don't know any adult AAC users and haven't had a chance to see adults fluently using AAC (since many of our readers have, or work with, young AAC users).

First up, The Language Stealers. From the video's YouTube description:

As part of the Radiowaves Street Life project, funded by Youth In Action for the British Council, and with additional funding from Co Durham Youth Opportunity Scheme, animator Vivien Peach working with us at Henderson House, Chilton. We made Language Stealers to promote our 'Equality Without Words?' campaign and the language boards of core words we are making. 

Language Stealers is a story of attribution, exposing the real barriers to communication for students with speech and motor impairments as being more to do with the situation they find themselves in than anything to do with their disposition. If nobody gives us a way to say or write the core words, or only gives us lesson nouns to go on class worksheets but no literacy instruction, then how dare they attribute our language delay to lack of ability? 



Next up, the Voice by Choice Comedy Sketch. This makes me smile every time I see it! Set in the context of adult AAC users at a speed dating event, it's also a commentary on some of the challenges that AAC users face: time lag, mishits, and limited voice selections. (You need to know that "bugger" is "used to express annoyance or anger"---although not as commonly here in the US. Urban dictionary info here.) From the video's YouTube info:
This comedy sketch was written by and stars Lee Ridley (aka Lost Voice Guy) and looks at the funny side of going speed dating when you use a communication aid. 




Enjoy!

 

Monday, October 27, 2014

More Resources Monday: What I've been reading about reading

Like many others, I spend a great deal of time each week reading articles, blog posts, websites, and research that can guide the stuff that I do with Maya. This week I've been thinking a lot about her reading abilities (again) and trying to figure out how to assess her and move forward. One of the best websites out there about literacy for AAC users is this one, from the powerhouse team Janice Light and David McNaughton. That site is my resource share today---with you all and with myself. It's a site that I come back to every few months, poke around, and then think "I need to figure out a plan." So it's here as a reminder for me, and a site you might want to file away.

I also saw that they have a webcast, but I haven't had time to check it out.

Literacy can be tricky business for functionally nonverbal people (it's difficult for me to imagine sounding out words without being able to use my voice to do so), but it's probably the second most important thing I can work on with Maya---my goals have been "get her a voice, then get her reading."


Friday, October 24, 2014

#AACfamily Friday photo round-up!

Different people, different systems, united by their use of AAC :)


A 3 year old boy learning about the words "hide", "find", "open/shut eyes", and "look" using a Pixon communication book and rice bins with his SLP, Shannon, in New Zealand! (See Shannon's website here!)

Mirabel, age 3, using Speak for Yourself during a shopping trip!

Sammie being silly while discussing Aunt Dawn, who has actually run multiple marathons and is the opposite of lazy (and doesn't eat pepperoni, either)!

Gathering the devices, joysticks, switches and stands we are using in a project we're doing on best practices for apps for communication! (from Sandra, AAC technician at DART, Western Sweden's Centre for AT and AAC)

3 year old Harry in Australia practicing for Show and Tell at childcare using Go Talk Now!

AAC up and ready to communicate with while trying to study in the college library . . . 

and the back of her Lifeproof case modified with stickers!

 Photos of Isaac, age 4, using TouchChat with Word Power to talk with his cousin while at the playground . . . 

and talking about colors with his mom!

Nicole (25) picking out her pumpkin at the pumpkin farm - wearing her talker, of course!

Tia learning about fall fields and harvest with her Boardmaker board in a communication binder . . .

and choosing which song she would like to watch/listen to via Speaking Dynamically! (Also, here is their Facebook group: Nadomestna komunikacija/neverbalno sporazumevanje)

The first time that Felix told his mom that he loved her! (she had already gotten him more cheese)

Aidan uses GoTalk to tell his dad about his day at school! (Check out his mom's blog here)

James, 4 years old, using a Tobii C-12 via auditory scanning at the pumpkin patch! (Fort Worth, TX)

Here is Nathaniel the first time he saw his talker. Someday we'll learn all those words! (Check out his mom's blog here)

 Tom is an 11 year old multi-modal communicator who is also using Speak for Yourself . . . 

and wanted his dad to take him to buy some hot chips!

Reese has been around devices since he was a puppy. When we were developing and testing Speak for Yourself, I would use it to ask him if he wanted to eat or go for a walk, so he learned to listen to AAC. Now he comes over anytime I'm working with it...in case there's something in it for him! (From Heidi, of the Speak for Yourself team)

Daniel said the first sentence as soon as he saw Elmo on tv, and the second when a Geico commercial came on (geckos totally look like frogs)! 

Abby at Camp Communicate in Maine, talking about pirates!

This is Lily Grace with her mama, grandma, and PODD communication book at the beach!

Josh and his friend have been in the same class since they were 3, and sometimes they even use each other's AAC! 

What's better than one communication device? 3!

Jack teaching his grandpa, Pa, how to use his iPad!

This is an old picture of ours (back when we used a full size iPad and a keyguard). I told Maya that it looked "a little sunny" and she replied "big sunny" :)

Thanks for all of the contributions---next week is the final Friday in October! For the final #AACfamily during AAC Awareness month, you can send in any AAC related photo. Send photos to uncommonfeedback@gmail.com by Thursday night at 8pm EST. 


Thursday, October 23, 2014

TBT: The Limitations of Sign Language for Children With Speech Delays



I was prompted to write this post by the many conversations that I've had with parents who say that they don't want to pursue AAC but that they are working (usually working hard) to teach their child sign language. I put this post together to have an easy link to share about some points to consider before choosing sign language as a child's primary method of communication.

Will stars in a tiny video clip that really drives the point home :) (Ironically, in the clip he's discussing watching a sign language video!)

It should be noted, of course, that I think signing is a great communication tool for a child to have as part of their total communication toolbox---it just shouldn't be taught to the exclusion of AAC that would let them independently communicate with any communication partner about any topic.

Wednesday, October 22, 2014

One Size (with some tailoring) Fits Many: Key Considerations in AAC Selection

“There is no one-size-fits-all approach to AAC.”

In my part-time non-paying job (AAC advocating online) I’ve heard this a lot. I’ve said it a few times myself, but always halfheartedly, because I’ll tell you a secret: while one size doesn’t fit all, I believe that there are nearly universal principles that must be considered and incorporated when setting up an AAC system for any person.

Nearly universal. As in, applying to nearly all people. Not one-size-fits-all, I guess, but one-size-(appropriately tailored)-fits-many.  I’m not going to suggest a specific device/app/system, but I’m going to provide you with organizational/planning points that you may want to carefully consider as you purchase a system, or configure and personalize a system that you already own.

Please note that I am not suggesting you overhaul your system overnight. If you feel like your current AAC system is a great fit and working well, this post probably isn't for you. I don’t suggest switching things up (unless, in reading this, you realize that even your user’s “proficiency” leaves them unable to communicate many things).

An AAC user deserves a system that will provide them with the maximum amount of language (referred to as SNUG---spontaneous novel utterance generation---the ability to put together an utterance that says exactly what they want it to say) with the minimum amount of time and effort. They need to be able to say exactly what they want to say, to whomever they want to talk to, whenever they want . . . and in a way that can happen as quickly and automatically as possible.

In order to help make this happen, here are the things that you need to think through:

1. Context-based communication pages are not the best idea. A context-based page groups together “all of the words” that you would need to communicate in a certain context. For example, an art page might contain these words (with icons):

I | paint | crayon | glitter | yellow | want
you | draw | marker | pencil | green | need
more | cut | brush | red | blue | like
all done | glue | scissors | orange | purple | bathroom

Why this is used: For someone new to AAC, this seems to make sense. Think of words that relate to art, put them all on a board, pull the board out during art. This board allows the user to request certain colors or items, to comment on the colors, to indicate what they like, to direct their communication partner “you draw,” and to also have a few commonly used words (more, all done, bathroom).

Limitations: Spontaneous novel utterance generation is low. Sure, the user can request red paint. But they can’t tell anyone that what they painted is a red school bus. They can’t ask to mix colors together, or request that the teacher draw a space shuttle so that they can paint stars next to it. They can’t point to their scribble and say “That’s my dad!” or tell their teacher “My pencil isn’t sharp enough” or “I don’t like the paint under my fingernails.” They can’t say “I’m thirsty!” They can’t comment on a friend’s painting or ask questions about what someone is drawing. They can’t say that they stopped painting because “my belly hurts” or “I feel tired” or “I don’t want to sit next to Sam because he keeps sticking his tongue out at me when you’re not looking.”

All of the words. When you wake up in the morning, you have the ability to use all of the words that you’ve ever heard. Your AAC user needs to be on a system that gives them access to all of the words, all of the time---not just food words at the dinner table and art words at the art table. 

2. Motor planning needs to be a part of the system. It just needs to be. “Motor planning” is what happens when you need to make your body do something, like step up onto a ladder. You think about what you want to do (lift your foot to step up) and (very, very quickly) your nerves and muscles zap information around: contract this muscle, relax that muscle, shift your balance back a little, change the angle of your foot, lift, lean forward, move at hip and knee joint, lower foot down at the appropriate speed, contact! The more that you execute a given motor plan, the better your body does it---it becomes automatic. A good example of this is typing the password on your computer. When you have a new password, you type it slowly for a few days, as your fingers learn the motor pattern of the new word. If you have an old password your fingers probably do it automatically, zipping through the letters without you even paying attention. That’s motor planning. (I wrote a whole post on it, with pictures and videos, here.)

Many individuals who are in need of AAC systems also have issues with muscle movements and motor planning, so the ability to organize a system in a way that allows for movements to access words to become automatic is essential. Imagine how much time and cognitive energy can be saved when an AAC  user’s hand just knows what little path to tap to say the word “go” instead of having to scan pages or search through folders because that word might be in a different spot.

AAC systems that honor and foster motor planning principles used to be hard to find (limited to one company, really), but more and more companies are realizing how essential this is and are attempting to allow their products to be configured in a way that allows for the automaticity that following a motor plan to find a word can bring.

Here are some examples of when motor planning is not incorporated:


Motor Planning Foil #1: Movement of words with increasing vocabulary size.

GRID 1:

red | orange | yellow
green | blue | purple
black | brown | grey

GRID 2:

red | orange | yellow | green
blue | purple | black | brown
grey | white | paint | brush
pencil | glue | scissors | glitter

In Grid 1, each word has a spot. The user learns the motor plan to say these words. Then, the vocabulary increases (Grid 2) and the user is left having to scan and search for words (or images). And every time the vocabulary increases, the words move. Not good.

Ideally, a grid should have as many buttons as possible (a lot---even for people with fine motor challenges), and the buttons that aren’t currently in use can be blacked out and reserved to hold more words later. (Blacking out unused buttons also makes the motor learning and target hitting more possible for people with motor challenges.)

As a simple example, this would be a revision of Grid 1, which holds all of the same words but can grow to become Grid 2 without changing the motor plan for the original words:


red | orange | yellow | green
blue | purple | black | brown
grey | (blank) | (blank) | (blank)
(blank) | (blank) | (blank) | (blank)
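For anyone who likes to see the logic spelled out, here is a minimal sketch of that "reserve the blanks" idea in Python---purely illustrative, not how any actual AAC product is implemented: new words only ever fill empty cells, so the position (and therefore the motor plan) of every existing word is permanent.

    # A toy model of "reserve the blanks": existing words keep their grid
    # positions forever; new vocabulary only ever fills empty cells.
    GRID_ROWS, GRID_COLS = 4, 4

    def make_grid():
        # Unused cells are None (the "blacked out" buttons).
        return [[None] * GRID_COLS for _ in range(GRID_ROWS)]

    def add_word(grid, word):
        # Place a new word in the first blank cell; never move old words.
        for r in range(GRID_ROWS):
            for c in range(GRID_COLS):
                if grid[r][c] is None:
                    grid[r][c] = word
                    return (r, c)  # this position is now permanent
        raise ValueError("Grid full---expand it, don't reshuffle it.")

    grid = make_grid()
    spots = {}
    for word in ["red", "orange", "yellow", "green", "blue",
                 "purple", "black", "brown", "grey"]:
        spots[word] = add_word(grid, word)

    # Months later the vocabulary grows, and the colors never move:
    for word in ["white", "paint", "brush", "pencil", "glue",
                 "scissors", "glitter"]:
        spots[word] = add_word(grid, word)

    assert spots["green"] == (0, 3)  # same motor plan before and after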

Motor Planning Foil #2: Predictive Text. Predictive text walks a user through a sentence by “predicting” the type of things that a user might want to say. One issue with apps that use predictive text is that the app is not fully able to predict the types of spontaneous, novel things a person might want to say.


                 want             to
    I    →       like      →      (people)
                 feel             (toys)
                                  (food)


For example, if you type I, the choices like, want, and feel might appear. Tapping like might lead you to categories toys, food, people, and probably the word to, which could connect to other verbs (to eat, to drink, etc). But what if the user wanted to say “I like jumping on my bed” or “I like the smell of farts” (I mean, you never know). That’s one problem with predictive text.

Another problem with predictive text is that, because the screen reloads after each selection, everything is always moving. If the word moves around (in the upper left corner if you get to it this way, but down at the bottom if you get to it a different way), then the user can never internalize the motor plan to get to a word, and they have to spend a lot of time scanning and searching. It’s a waste of energy. The last issue with predictive text is that it's sometimes difficult to find a word if you’re not walked to it (if you wanted to start a sentence with the word caterpillar, it might be dicey). There are typically “category” folders in these systems that you can also access, but they may have many layers (which means many button pushes to find a word) or require scrolling (which wastes time and can present a challenge to those with fine motor impairment).
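To make the "everything is always moving" problem concrete, here is a tiny illustrative sketch (the prediction table is invented for this example, not taken from any real app). Because the button list is rebuilt after every selection, the same word lands in a different spot depending on how you got there:

    # Toy prediction table, invented for illustration only.
    predictions = {
        "I":    ["want", "like", "feel", "to"],
        "like": ["to", "toys", "food", "people"],
        "want": ["to", "food", "toys", "more"],
    }

    def screen_after(selection):
        # The whole screen is redrawn after each tap.
        return predictions.get(selection, [])

    # "to" is the 4th button after "I", but the 1st after "like" or "want":
    print(screen_after("I").index("to"))     # prints 3
    print(screen_after("like").index("to"))  # prints 0
    # Same word, different position every time = no stable motor plan.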

Motor Planning Foil #3: Putting one word on a bunch of screens. When you say a word with your mouth (like “cut”) your muscles always move in exactly the same way to produce that word, no matter what the context is. You can “cut an apple” or “cut down a tree” or “cut that deck of cards”, but cut = cut = cut. If your system is made of a lot of screens that could function as context boards, then you may have one word in a lot of places (“want” on the eat screen, the drink screen, the art screen, the toy screen, etc). This is a lot of clutter and potential confusion (especially if the screens don’t have duplicate words in the same exact place, which should be the minimum requirement for duplicating words within a system).
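If a system does duplicate a word across screens, that minimum requirement can be expressed as a little sanity check (hypothetical code, written just to show the rule): every copy of a word has to sit at the exact same coordinates.

    # Hypothetical layouts: each screen maps words to (row, column) spots.
    screens = {
        "eat": {"want": (0, 0), "more": (0, 1)},
        "art": {"want": (0, 0), "more": (2, 3)},  # "more" moved---bad!
    }

    def misplaced_words(screens):
        seen, bad = {}, set()
        for layout in screens.values():
            for word, spot in layout.items():
                if word in seen and seen[word] != spot:
                    bad.add(word)  # same word, two different motor plans
                seen.setdefault(word, spot)
        return bad

    print(misplaced_words(screens))  # prints {'more'}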


3. Minimal work for maximum SNUG. An AAC user shouldn’t have to navigate through multiple layers of folders to find a word. One of my favorite sentences that Maya ever said on her device was “Lightning---scary!” as the thunder cracked outside of our window. I loved it because it was exactly what she wanted to say, and she was able to quickly get to those two words, lightning and scary, which could easily have been buried in layers in another system. In one system, the default way to find the word lightning would require this navigation:

(more) --- (school) --- (weather) --- (more) --- lightning

No user should have to think their way through multiple categories to find a word. By the time Maya reached “lightning” in that example (if she managed to get there without giving up) the moment would have been long gone. And that’s only a two word sentence! It would take a very long time to say something more substantial. Systems need to be organized in a way that will minimize the amount of navigation that it will take to get to each word. Each additional step of navigation takes extra time and cognitive energy. 
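Some quick, purely illustrative arithmetic shows why depth matters so much (the 60-button page below is just an example, not a spec for any particular device). A big grid with shallow navigation can, in principle, reach an enormous vocabulary in a couple of taps---so a word that takes five taps to reach is buried by organization, not by capacity:

    # If every button can either say a word or open one more full page,
    # the reachable vocabulary grows exponentially with each tap:
    buttons_per_page = 60  # illustrative grid size

    for taps in range(1, 4):
        print(f"{taps} tap(s): up to {buttons_per_page ** taps:,} words")
    # 1 tap(s): up to 60 words
    # 2 tap(s): up to 3,600 words
    # 3 tap(s): up to 216,000 words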

A word of warning, though: notice that I said the goal is maximum spontaneous novel utterance generation---not maximum words. Programming each button to say several words (“I want” “I like” “I see” “I like that” “Pick up” “put down” etc) is not actually increasing language. It’s just relabeling buttons. It makes novel utterances more tricky (what about “You like” or “He likes” or “Mom likes”---do they all get buttons too?). It might make it faster and easier to say simple things, but this approach will clutter the field (or fill more folders to tap through) and make it more difficult to combine individual words and build new sentences. This multi-word-button approach will also make it difficult to approach instruction about grammar, verb tenses, syntax, etc in a simple, direct way. Another consideration is that for children with some diagnoses, speech segmentation (the spaces where one word stops and a new word begins) is a challenge. This is part of why some children are prone to scripting---using entire sentences (“Go to the store now?”) as one “word”, and building sentences like “go to the store now yesterday.” Using a single-word vocabulary allows the user to see a word as the smallest unit of language and learn how to manipulate those pieces.

4. There is no “starter” AAC. Do not waste your time (or money) on a “starter” AAC system, one that you plan to use for a little while until the user is ready for a robust system. This is your child’s (or client’s) ability to communicate---it is not fair to waste months of their life teaching them a system that you fully intend to abandon when/if they become “good enough” at using it to prove that they’re ready for something else. Imagine the uprising that would happen in an office setting if months were spent training on and incorporating a major software system, and just when the employees had all mastered it the boss called them in to congratulate them and introduce them to the “real system that we’re going to use, now that you’ve all proved your competency.” Unfair.

You need to find a system that can grow with the user---and you need to evaluate how that growth will happen in terms of the considerations mentioned above.

  • Will they have access to all of the words easily as it grows? 
  • Will the motor planning remain the same, or very very close to the same? 
  • Will words become buried in layers of folders as the vocabulary increases? 
  • Will predictive text make it difficult to say exactly what they want to say?
  • Can the system hold a massive amount of words, and does it have a solid keyboard to support emergent and conventional spellers who may want to start typing their words?

In my opinion, these are core points to consider as you plan and implement any AAC system. I believe that a system that is organized in a way that takes the above into consideration will offer maximum success and maximum SNUG. If you’ve landed on this page because you’re in the process of selecting an AAC system for your child, I hope that you’ll think through each of these points as you evaluate the options---each one of these things became a bump in the road for us at some point, and I wish that we could get back the months that we wasted figuring all of this out. I wish that we could have heard Maya’s thoughts when she was younger, rather than fumbling through several options that let her effortlessly request milk, or a certain color crayon, and not say much else.




PS: And now, for my preemptive defenses, because I anticipate pushback on this:
1. My kid's system doesn't do any of that, and it's perfect for him! That's great! I'm so glad your kid has something that works well for them. This post isn't for you, then, because I would never suggest that you switch a system that is working well for you. However, for most new users, I believe the points above are essential considerations that will maximize success and SNUG.

2. What about kids with vision issues who would struggle with small buttons? Good question, and one that I don't know the answer to. I do believe that motor planning is especially important for those with vision issues, who should have to rely on visually scanning for words even less than a user with typical sight. I've seen great use of high contrast symbols and heard about success with visual/tactile markers in the field. When Maya was at the eye doctor she had her eyes dilated, and she was using her talker after the dilation. The doctor said to me: "You know that she can't see those pictures anymore, right? They are just bright blurry squares." I had no idea, but it made sense that the motor plans she had learned for each word, combined with the vague coloring of the icons she knew, were enough to keep her going. (And I'm certainly not saying that my kid's post-dilation blurry vision is the same as someone with a serious visual impairment . . . it was just interesting for me to see her motor planning in action when her vision was reduced.) I don't know how to create systems for all users, but I believe that the above features can be considered when planning any system.

3. What about kids who struggle to hit small targets? They might need a few words on one page. Again, good question, and one that I don't know the answer to. Access is tricky. One thing that I know is that many kids who seem unable to hit a small target (or even isolate a finger) can often develop that ability with a keyguard or fingerless glove, combined with creative positioning of the device and some stabilizing hand-under-hand (or wrist) support. I've seen an AAC user who uses his toe for direct access. If I were the AAC user and I had the following two choices---say exactly what I want to say, but with more time and effort, or say a few things (eat, drink, more, stop) really fast---well, I would choose the former. I don't know how to create systems for all users (and I'm not even going to wander into the waters of non-direct access here), but I believe that the above features can be considered when planning any system.

4. Who are you, anyway? You aren't even qualified to make a list of what needs to be considered.  If a professional's response to this list is to defensively point out that I am not a professional, rather than to consider these points with an open mind (and then provide well-argued critiques, if necessary), well . . . I would have reservations about working with that professional. Feel free to come back with a point-counterpoint rebuttal. I welcome debate and feel like we can all learn and grow from it. (If you ended up here following links and are sincerely wondering who I am, I'm an AAC parent who has spent a few years doing a lot of reading, research, immersion, and training about all things AAC. And I'm in school now to become an SLP.)