Friday, October 24, 2014

#AACfamily Friday photo round-up!

Different people, different systems, united by their use of AAC :)


A 3 year old boy learning about the words "hide", "find", "open/shut eyes", and "look" using a Pixon communication book and rice bins with his SLP, Shannon, in New Zealand! (See Shannon's website here!)

Mirabel, age 3, using Speak for Yourself during a shopping trip!

Sammie being silly while discussing Aunt Dawn, who has actually run multiple marathons and is the opposite of lazy (and doesn't eat pepperoni, either)!

Gathering the devices, joysticks, switches and stands we are using in a project we're doing on best practices for apps for communication! (from Sandra, AAC technician at DART, Western Sweden's Centre for AT and AAC)

3 year old Harry in Australia practicing for Show and Tell at childcare using Go Talk Now!

AAC up and ready to communicate with while trying to study in the college library . . . 

and the back of her Lifeproof case modified with stickers!

 Photos of Isaac, age 4, using TouchChat with Word Power to talk with his cousin while at the playground . . . 

and talking about colors with his mom!

Nicole (25) picking out her pumpkin at the pumpkin farm - wearing her talker, of course!

Tia learning about the fall fields and harvest with her Boardmaker board in a communication binder . . .

and choosing which song she would like to watch/listen to via Speaking Dynamically! (Also, here is their Facebook group: Nadomestna komunikacija/neverbalno sporazumevanje)

The first time that Felix told his mom that he loved her! (she had already gotten him more cheese)

Aidan uses GoTalk to tell his dad about his day at school! (Check out his mom's blog here)

James, 4 years old, using a Tobii C-12 via auditory scanning at the pumpkin patch! (Fort Worth, TX)

Here is Nathaniel the first time he saw his talker. Someday we'll learn all those words! (Check out his mom's blog here)

 Tom is an 11 year old multi-modal communicator who is also using Speak for Yourself . . . 

and wanted his dad to take him to buy some hot chips!

Reese has been around devices since he was a puppy. When we were developing and testing Speak for Yourself, I would use it to ask him if he wanted to eat or go for a walk, so he learned to listen to AAC. Now he comes over anytime I'm working with it...in case there's something in it for him! (From Heidi, of the Speak for Yourself team)

Daniel said the first sentence as soon as he saw Elmo on tv, and the second when a Geico commercial came on (geckos totally look like frogs)! 

Abby at Camp Communicate in Maine, talking about pirates!

This is Lily Grace with her mama, grandma, and PODD communication book at the beach!

Josh and his friend have been in the same class since they were 3, and sometimes they even use each other's AAC! 

What's better than one communication device? 3!

Jack teaching his grandpa, Pa, how to use his iPad!

This is an old picture of ours (back when we used a full size iPad and a keyguard). I told Maya that it looked "a little sunny" and she replied "big sunny" :)

Thanks for all of the contributions---next week is the final Friday in October! For the final #AACfamily during AAC Awareness month, you can send in any AAC related photo. Send photos to uncommonfeedback@gmail.com by Thursday night at 8pm EST. 


Thursday, October 23, 2014

TBT: The Limitations of Sign Language for Children With Speech Delays



I was prompted to write this post by the many conversations that I've had with parents who say that they don't want to pursue AAC but that they are working (usually working hard) to teach their child sign language. I put this post together to have an easy link to share about some points to consider before choosing sign language as a child's primary method of communication.

Will stars in a tiny video clip that really drives the point home :) (Ironically, in the clip he's discussing watching a sign language video!)

It should be noted, of course, that I think signing is a great communication tool for a child to have as part of their total communication toolbox---it just shouldn't be taught to the exclusion of AAC that would let them independently communicate with any communication partner about any topic.

Wednesday, October 22, 2014

One Size (with some tailoring) Fits Many: Key Considerations in AAC Selection

“There is no one-size-fits-all approach to AAC.”

In my part-time non-paying job (advocating for AAC online) I’ve heard this a lot. I’ve said it a few times myself, but always halfheartedly, because I’ll tell you a secret: while one size doesn’t fit all, I believe that there are nearly universal principles that must be considered and incorporated when setting up an AAC system for any person.

Nearly universal. As in, applying to nearly all people. Not one-size-fits-all, I guess, but one-size-(appropriately tailored)-fits-many.  I’m not going to suggest a specific device/app/system, but I’m going to provide you with organizational/planning points that you may want to carefully consider as you purchase a system, or configure and personalize a system that you already own.

Please note that I am not suggesting you overhaul your system overnight. If you feel like your current AAC system is a great fit and working well, this post probably isn't for you. I don’t suggest switching things up on a proficient user (unless, in reading this, you realize that even their “proficiency” leaves them unable to communicate many things).

An AAC user deserves a system that will provide them with the maximum amount of language (referred to as SNUG---spontaneous novel utterance generation---the ability to put together an utterance that says exactly what they want it to say) with the minimum amount of time and effort. They need to be able to say exactly what they want to say, to whomever they want to talk to, whenever they want . . . and in a way that can happen as quickly and automatically as possible.

In order to help make this happen, here are the things that you need to think through:

1. Context-based communication pages are not the best idea. A context-based page groups together “all of the words” that you would need to communicate in a certain context. For example, an art page might contain these words (with icons):

I | paint | crayon | glitter | yellow | want
you | draw | marker | pencil | green | need
more | cut | brush | red | blue | like
all done | glue | scissors | orange | purple | bathroom

Why this is used: For someone new to AAC, this seems to make sense. Think of words that relate to art, put them all on a board, pull the board out during art. This board allows the user to request certain colors or items, to comment on the colors, to indicate what they like, to direct their communication partner “you draw,” and to also have a few commonly used words (more, all done, bathroom).

Limitations: Spontaneous novel utterance generation is low. Sure, the user can request red paint. But they can’t tell anyone that what they painted is a red school bus. They can’t ask to mix colors together, or request that the teacher draw a space shuttle so that they can paint stars next to it. They can’t point to their scribble and say “That’s my dad!” or tell their teacher “My pencil isn’t sharp enough” or “I don’t like the paint under my fingernails.” They can’t say “I’m thirsty!” They can’t comment on a friend’s painting or ask questions about what someone is drawing. They can’t say that they stopped painting because “my belly hurts” or “I feel tired” or “I don’t want to sit next to Sam because he keeps sticking his tongue out at me when you’re not looking.”

All of the words. When you wake up in the morning, you have the ability to use all of the words that you’ve ever heard. Your AAC user needs to be on a system that gives them access to all of the words, all of the time---not just food words at the dinner table and art words at the art table. 

2. Motor planning needs to be a part of the system. It just needs to be. “Motor planning” is what happens when you need to make your body do something, like step up onto a ladder. You think about what you want to do (lift your foot to step up) and (very, very quickly) your nerves and muscles zap information around: contract this muscle, relax that muscle, shift your balance back a little, change the angle of your foot, lift, lean forward, move at the hip and knee joints, lower foot down at the appropriate speed, contact! The more that you execute a given motor plan, the better your body does it---it becomes automatic. A good example of this is typing the password on your computer. When you have a new password, you type it slowly for a few days, as your fingers learn the motor pattern of the new word. If you have an old password your fingers probably do it automatically, zipping through the letters without you even paying attention. That’s motor planning. (I wrote a whole post on it, with pictures and videos, here.)

Many individuals who are in need of AAC systems also have issues with muscle movements and motor planning, so the ability to organize a system in a way that allows for movements to access words to become automatic is essential. Imagine how much time and cognitive energy can be saved when an AAC  user’s hand just knows what little path to tap to say the word “go” instead of having to scan pages or search through folders because that word might be in a different spot.

AAC systems that honored and fostered motor planning principles used to be hard to find (limited to one company, really), but more and more companies are realizing how essential it is and attempting to allow their products to be configured in a way that will allow for the automaticity that following a motor plan to find a word can bring.

Here are some examples of when motor planning is not incorporated:


Motor Planning Foil #1: Movement of words with increasing vocabulary size.

GRID 1:

red | orange | yellow
green | blue | purple
black | brown | grey

GRID 2:

red | orange | yellow | green
blue | purple | black | brown
grey | white | paint | brush
pencil | glue | scissors | glitter

In Grid 1, each word has a spot. The user learns the motor plan to say these words. Then, the vocabulary increases (Grid 2) and the user is left having to scan and search for words (or images). And every time the vocabulary increases, the words move. Not good.

Ideally, a grid should have as many buttons as possible (a lot---even for people with fine motor challenges), with the buttons that aren’t currently in use blacked out and reserved to hold more words later. (Blacking out unused buttons also makes motor learning and target hitting more manageable for people with motor challenges.)

As a simple example, this would be a revision of Grid 1, which holds all of the same words but can grow to become Grid 2 without changing the motor plan for the original words:


red | orange | yellow | green
blue | purple | black | brown
grey | (blank) | (blank) | (blank)
(blank) | (blank) | (blank) | (blank)

Motor Planning Foil #2: Predictive Text. Predictive text walks a user through a sentence by “predicting” the type of things that a user might want to say. One issue with apps that use predictive text is that the app is not fully able to predict the types of spontaneous, novel things a person might want to say.

          want          to
I   →     like    →     (people)
          feel          (toys)
                        (food)

For example, if you select I, the choices like, want, and feel might appear. Tapping like might lead you to categories toys, food, people, and probably the word to, which could connect to other verbs (to eat, to drink, etc). But what if the user wanted to say “I like jumping on my bed” or “I like the smell of farts” (I mean, you never know)?  That’s one problem with predictive text.

Another problem with predictive text is that, because the screen reloads after each selection, everything is always moving. If a word moves around (in the upper left corner if you get to it one way, but down at the bottom if you get to it a different way), then the user can never internalize the motor plan to get to that word, and they have to spend a lot of time scanning and searching. It’s a waste of energy.  The last issue with predictive text is that it's sometimes difficult to find a word if you’re not walked to it (if you wanted to start a sentence with the word caterpillar, it might be dicey). There are typically “category” folders in these systems that you can also access, but they may have many layers (which means many button pushes to find a word) or require scrolling (which wastes time and can present a challenge to those with fine motor impairment).

Motor Planning Foil #3: Putting one word on a bunch of screens. When you say a word with your mouth (like “cut”) your muscles always move in exactly the same way to produce that word, no matter what the context is. You can “cut an apple” or “cut down a tree” or “cut that deck of cards”, but cut = cut = cut. If your system is made of a lot of screens that could function as context boards, then you may have one word in a lot of places (“want” on the eat screen, the drink screen, the art screen, the toy screen, etc). This is a lot of clutter and potential confusion (especially if the screens don’t have duplicate words in the same exact place, which should be the minimum requirement for duplicating words within a system).


3. Minimal work for maximum SNUG. An AAC user shouldn’t have to navigate through multiple layers of folders to find a word. One of my favorite sentences that Maya ever said on her device was “Lightning---scary!” as the thunder cracked outside of our window. I loved it because it was exactly what she wanted to say, and she was able to quickly get to those two words, lightning and scary, which could easily have been buried in layers in another system. In one system, the default path to find the word lightning would require this navigation:

(more) --- (school) --- (weather) --- (more) --- lightning

No user should have to think their way through multiple categories to find a word. By the time Maya reached “lightning” in that example (if she managed to get there without giving up) the moment would have been long gone. And that’s only a two-word sentence! It would take a very long time to say something more substantial. Systems need to be organized in a way that will minimize the amount of navigation that it takes to get to each word. Each additional step of navigation takes extra time and cognitive energy.

A word of warning, though: notice that I said the goal is maximum spontaneous novel utterance generation----not maximum words. Programming each button to say several words (“I want” “I like” “I see” “I like that” “Pick up” “put down” etc) is not actually increasing language. It’s just relabeling buttons. It makes novel utterances more tricky (what about “You like” or “He likes” or “Mom likes”---do they all get buttons too?).  It might make it faster and easier to say simple things, but this approach will clutter the field (or fill more folders to tap through) and make it more difficult to combine individual words and build new sentences. This multi-word-button approach will also make it difficult to approach instruction about grammar, verb tenses, syntax, etc in a simple, direct way. Another consideration is that for children with some diagnoses, speech segmentation (the spaces where one word stops and a new word begins) is a challenge. This is part of why some children are prone to scripting---using entire sentences “Go to the store now?”  as one “word”, building sentences like “go to the store now yesterday.” Using a single-word vocabulary allows the users to see a word as the smallest unit of language and learn how to manipulate those pieces.

4. There is no “starter” AAC. Do not waste your time (or money) on a “starter” AAC system, one that you plan to use for a little while until the user is ready for a robust system. This is your child’s (or client’s) ability to communicate---it is not fair to waste months of their life teaching them a system that you fully intend to abandon when/if they become “good enough” at using it to prove that they’re ready for something else. Imagine the uprising that would happen in an office setting if months were spent training on and incorporating a major software system, and just when the employees had all mastered it the boss called them in to congratulate them and introduce them to the “real system that we’re going to use, now that you’ve all proved your competency.” Unfair.

You need to find a system that can grow with the user---and you need to evaluate how that growth will happen in terms of the considerations mentioned above.

  • Will they have access to all of the words easily as it grows? 
  • Will the motor planning remain the same, or very very close to the same? 
  • Will words become buried in layers of folders as the vocabulary increases? 
  • Will predictive text make it difficult to say exactly what they want to say?
  • Can the system hold a massive amount of words, and does it have a solid keyboard to support emergent and conventional spellers who may want to start typing their words?

In my opinion, these are core points to consider as you plan and implement any AAC system. I believe that a system that is organized in a way that takes the above into consideration will offer maximum success and maximum SNUG. If you’ve landed on this page because you’re in the process of selecting an AAC system for your child, I hope that you’ll think through each of these points as you evaluate the options----each one of these things became a bump in the road for us at some point, and I wish that we could get back the months that we wasted figuring all of this out. I wish that we could have heard Maya’s thoughts when she was younger, rather than fumbling through several options that let her effortlessly request milk, or a certain color crayon, and not say much else.




PS: And now, for my preemptive defenses, because I anticipate push back on this:
1. My kid's system doesn't do any of that, and it's perfect for him! That's great! I'm so glad your kid has something that works well for them. This post isn't for you, then, because I would never suggest that you switch a system that is working well for you. However, for most new users, I believe the points above are essential considerations that will maximize success and SNUG.

2. What about kids with vision issues who would struggle with small buttons? Good question, and one that I don't know the answer to. I do believe that motor planning is especially important for those with vision issues, who should have to rely on visual scanning for words even less than a user with typical sight does. I've seen great use of high contrast symbols and heard about success with visual/tactile markers in the field. When Maya was at the eye doctor she had her eyes dilated, and she was using her talker after the dilation. The doctor said to me: "You know that she can't see those pictures anymore, right? They are just bright blurry squares." I had no idea, but it made sense that the motor plans she had learned for each word, combined with the vague coloring of the icons she knew, were enough to keep her going. (And I'm certainly not saying that my kid's post-dilation blurry vision is the same as a serious visual impairment . . . it was just interesting for me to see her motor planning in action when her vision was reduced.) I don't know how to create systems for all users, but I believe that the above features can be considered when planning any system.

3. What about kids who struggle to hit small targets? They might need a few words on one page. Again, good question, and one that I don't know the answer to. Access is tricky. One thing that I know is that many kids who seem unable to hit a small target (or even isolate a finger) can often develop that ability with a keyguard or fingerless glove, combined with creative positioning of the device and some stabilizing hand-under-hand (or wrist) support. I've seen an AAC user who uses his toe for direct access. If I were the AAC user and I had the following two choices: (1) say exactly what I want to say, but with more time and effort, or (2) say a few things (eat, drink, more, stop) really fast . . . well, I would choose the former. I don't know how to create systems for all users (and I'm not even going to wander into the waters of non-direct access here), but I believe that the above features can be considered when planning any system.

4. Who are you, anyway? You aren't even qualified to make a list of what needs to be considered.  If a professional's response to this list is to defensively point out that I am not a professional, rather than to consider these points with an open mind (and then provide well-argued critiques, if necessary), well . . . I would have reservations about working with that professional. Feel free to come back with a point-counterpoint rebuttal. I welcome debate and feel like we can all learn and grow from it. (If you ended up here following links and are sincerely wondering who I am, I'm an AAC parent who has spent a few years doing a lot of reading, research, immersion, and training about all things AAC. And I'm in school now to become an SLP.)




Tuesday, October 21, 2014

Take-a-Look Tuesday: AAC user videos

Both of these videos, which I found during my search for an effective AAC system for Maya, shaped my perception of AAC users . . . in very good, but very different ways. The first video is short (and silly!) and the second is actually a full-length documentary (which I shared several years ago, so if you're a long time reader it may seem familiar).

The first video is of a boy talking with his brother and sister (they're triplets!) at dinner (using a Springboard Lite from PRC). I remember first seeing this video and loving it---it was one of the first times I saw AAC included in home life like it was no big deal. Also, you kind of can't help but laugh at the typical kid talk :)




The second video is the movie "Only God Could Hear Me." This documentary follows the lives of four adult AAC users and simultaneously tells the story of the creation of the Minspeak AAC language. The opening scene of this movie totally blew me away, simultaneously revealing and destroying assumptions that I was unaware I had about people who use AAC. Chris Klein, who is featured in that opening scene (and throughout the movie) is now the president of USSAAC, the United States Society for Augmentative and Alternative Communication. If you don't have time for more, at least watch the first four minutes and twelve seconds.




Happy viewing!  


 

Monday, October 20, 2014

More Resources Monday: Pinterest



Today's resource post is a DIY resource: Pinterest!

The reason it's DIY is because, frankly, Pinterest is a bit foreign and overwhelming to me. So many links! So many pictures! At the same time, it's a great place to find, collect, and organize resources that you may want to refer back to or save for later. (Sitting here and thinking about Pinterest is causing me to think that I should really use my account more.)

I'm only going to link to two Pinterest resources, because these two AAC pinners are certainly enough to keep you busy and reading for at least this week:

PrAACtical AAC: See their boards here

Lauren S. Enders, MA, CCC-SLP: See her boards here (there's a lot of stuff here---scroll down until you see a cluster labeled "AAC" if you're not interested in other tech/speech boards)

Happy reading!


 

Friday, October 17, 2014

#AACfamily Friday: AAC Users & Communication Partners

Welcome to #AACfamily Friday! This week we had a drop in submissions---mid-month slump? (Honestly, it was kind of appreciated during a rather stressful week on my end.) Here are some awesome AAC users with their communication partners!


Felix showing his sister that we programmed "feather" into Speak for Yourself!

3 year old Harry from Australia and his dad playing and having a chat first thing in the morning!

Olivia (who will be 4 in two weeks) using Speak for Yourself on her iPad mini with her brothers, Michael and Jayden, and her sister, Carlie!

A whole AAC family! Jess's parents each went voiceless for several days in October to get the true feel for being an AAC user---you can read about the experience on her mom's blog!

James (4) learning to communicate using a Tobii C-12 via auditory scanning with his SLP, Landon. They are playing a fishing game, talking, having a good time!

Maya and Will, chatting in the stroller!

Daniel uses Speak for Yourself to talk to his grandparents, who live far away!

Hosea joking with his sister Avelina by saying "pants" and gesturing to his head!


Thanks for the contributions! I look forward to seeing these pictures each week! For next week's #AACfamily Friday post, anything goes: any AAC related picture is fair game. More information is more fun, so try to include your location and the name of the device/app. Email submissions to me (by next Thursday, 8pm EST) at: uncommonfeedback@gmail.com

   

Thursday, October 16, 2014

Throwback Thursday: I Am Not A Mindreader (And Neither Are You)




This is for all of the parents who think "We really don't need AAC at home, because I can tell what he wants to say" or "I can understand about half of her words, and when she combines them with gestures I get the main idea." 

This is for the teachers/staff who think "She's vocalizing so much! I don't want to encourage the device when she's trying to talk instead" and "He's so communicative---he'll grab our hands and point to the paint and that's a pretty clear way of saying 'let's paint', so we can leave the talker off to the side."

You are selling these kids short when you do that. You are predicting that they are trying to say something simple (let's paint) instead of something complicated (your hands can reach the paint but mine can't, because it's too high!).

We are not mindreaders. This throwback post explores this point:



   

Wednesday, October 15, 2014

Getting Started with Meaningful Modeling

So you’ve got an AAC device and you’re ready to see the magic of communication unfold? 

Well, get ready to jump in and help with the unfolding, because this is going to require your participation. Look back at Monday’s post about aided language input, check out yesterday’s video clips of modeling in action, and get ready to jump in. (Or, to re-jump in, because modeling never stops. Once your child is speaking in 6 word sentences, you can model 7 word sentences. Or metaphors. Or alliteration. Or something. There’s always more.)

First things first: you have some work to do before you can start modeling. You need to learn the language. I remember the first night we had Maya’s app: as soon as she was in bed I sat with it and tapped in and out of screens, trying to note where important words were. Here are some tricks that might be helpful:

  • Read a children's book using your child's device. Choose something simple, and substitute pronouns (he/she/it) for overly specific vocabulary that you may not have programmed yet.

  • Have a conversation with your spouse, a friend, or yourself, using only the device as your voice.
  • Look at any random thing in your line of sight and describe it using the device: what is it? what can you do with it? what are some adjectives that you could apply to it (color, texture, materials, attributes)?

This is a blue, hard chair. I can sit on the chair, and you can too. 
I can push on the chair and make it go.
She can sit on the chair and so can he, but not everyone together. 
It is a small chair, not a big one. I can step on the chair and climb up high. 
Can you step up? Be careful not to fall! 
I like the blue chair, but I love yellow chairs. What color do you like?

  • If you're a member of an online users group, see if other parents want to connect over Skype/Facetime and try to talk using only the device.

Now that you've prepped, you need to figure out what to model. 

How many words to model: If you read this post on the Speak for Yourself blog, you'll see that a good starting point is to model one more word than the child is currently producing (sometimes I mix it up and throw in a few complete sentences---you know your user and you'll see what works best for them).

Which words to model: Core words offer the most bang for the buck. There are only a few conversations that involve the word "rhinoceros" . . . but the words "go" "can" "make" "stop" "on" "off" "in" "out" . . . well, you probably use them every day, many times, without even noticing. Ideally, you want to make sure you're modeling a nice mix of nouns, verbs, prepositions, etc. (Old school AAC focused a lot on making choices/requests from a field of nouns, and the verbs were shelved for too long.) All of that being said, if your child is rhinoceros-obsessed, then teach it! And then, quickly, start teaching about how the rhinoceros can move or stop, is heavy, has four feet, etc.

Which types of speech to model: Besides thinking about the specific words you're modeling, think about the different ways that people use language. If you want to help model communication, you need to model all types of communication. Speech is used for a ton of different purposes. Imagine sitting with a child and a pumpkin (since fall is in the air)---here are some different types of language that can be modeled around just one pumpkin.


Functions of Language and Examples:

Labeling: “pumpkin”   “orange”   “That is a pumpkin.”

Requesting: “Give me that”   “Give me pumpkin”   “For me”

Asking questions: “What is that?”   “Is it heavy?”   “Is it big?”   “Can you pick it up?”   “Can it move?”   “What can you tell me about that thing?”   “Where could we look for pumpkins?”   “When do you see pumpkins in the store?”   “Do you know a holiday that has to do with pumpkins?”   “Who can eat a pumpkin?”

Answering questions: (answer any of the questions above)

Getting someone’s attention: “Look! A pumpkin!”

Protesting: (if the child isn’t interested in what you’re doing) “Don’t like this.”   “No pumpkins!”   “Hate pumpkin!”   “Something different now.”

Commenting: “This pumpkin is so big!”   “Pumpkins grow outside.”   “The pumpkin feels bumpy.”   “I like this color.”

Teasing: “Can we cut it up and make a pumpkin pie?”   “This is my pumpkin!”   “I really like this blue pumpkin.”

Correcting: (in response to the teasing directly above) “Not yours---mine!”   “No! Orange pumpkin!”

Bossing people around/Directing: (roll the pumpkin and have the child direct the activity) “Go!”   “Stop”   “Go faster!”   “Go slower!”

Negotiating/arguing: (in the activity above) “No more game. All done.”   “Not done. More now!”

Tattling: (tell the pumpkin not to roll, tell the child that the pumpkin isn’t going to roll anymore, then roll the pumpkin and pretend that you didn’t see it happen, or have a puppet/doll push it and pretend you didn’t see) “It went!”   “More rolling!”   “I saw it go!”   “Naughty!”   “Sneaky!”   “Silly pumpkin!”

Talking about feelings: “I like the pumpkin”   “The pumpkin makes me happy”

Talking about the past: “Last year we went to pick a pumpkin at the farm.”

Talking about the future: “Maybe we can go pick a pumpkin tomorrow.”

And that's not a comprehensive list of language functions, either! And it's just one silly pumpkin! Imagine all of the great stuff you could say about something that's actually cool!

At this point, you could be thinking Wait, I couldn't really say any of that stuff with our system. It's too hard to model novel sentences on, or We have a lot of specialized vocabulary but not a lot of core words, or We have a lot of nouns and requesting words but I don't think I've ever noticed the question words. Well then . . . it may be time to re-evaluate your system. If the words aren't there, or if they are there but in a way that you (as a fully literate adult without motor/access challenges) can't get to them easily, then this is not a fair long-term set-up for your child.

Hopefully you're thinking Wow! I'm really getting this! But understanding is easier than actually doing it. And that's true. You know what makes modeling easier? Planning. For some reason I just thought modeling for Maya would come naturally (which it did, a little, but certainly not to the extent that I'm discussing here). I attended the ISAAC conference in 2014 and was impressed by the amount of planning and structure that went into the AAC interventions that were presented and discussed. I realized that I should approach AAC teaching/learning the same way that I would approach any other type of teaching/learning (by planning and preparing ahead of time). 

Here are two resources that may help you to approach modeling with a bit of forethought: 

First, this brainstorming chart from the Speak for Yourself team lets you start simply---what's one thing that your child really loves---and helps you build from there. (It originally appeared here.)

Second, here's an empty copy of the chart I made above. If you're a planner, you can think about an activity (play-doh? reading a book? playing with toy cars? digging a hole outside?) and brainstorm different things that you could model. You can view and print it here.  

Remember, this is a marathon. All modeling is good modeling. Any time you use AAC to communicate, you are validating and supporting your child's use of AAC. The offerings here may help to boost your modeling game and help you target language in a more meaningful way, but don't waste one second feeling bad if you read this and thought "well, there's another thing I don't have time for." Maybe you don't have time today, but you can carve out time at some point this week to work on this (put it on your calendar). This adds up. This will make a difference.

Happy Modeling!