Monday, February 23, 2015

Ponderings on Icons, Text, and AAC, via a mini experiment

Like many children with global challenges, Maya has always been a child with a large number of goals. I realized early on that, as a human with limited energy and resources, it wouldn't be possible for me to approach all areas of development with universal enthusiasm and passion, and I decided to focus on communication and literacy. These seemed, to me, to be the cornerstones of everything else---communication is a right, and I was pretty determined to help her find a system that would serve her well. Literacy opens the doors to basically all learning (Want to learn about nature? Let's read a book about it. You like geography? We can find a book on that.). In the world of children with complex communication needs, literacy and AAC are two fields that, when displayed in a Venn diagram type of way, have a solid overlap. To someone with decent (but not deep) knowledge about the two, they seem to mostly complement each other, except for a few fuzzy points.

This is where I note that the bulk of my reading and learning is about AAC, and I'm not a literacy expert.

Here's one area of conflict: With regards to literacy, there is research that indicates that the pairing of words with icons (like in early reader books where there is a little picture of Dora directly above the word "Dora" in the sentence) is not beneficial, and may be (or is? I don't remember) actually detrimental. This makes sense, as icons would distract the reader from the words being read, and also kind of pull their eyes out of the left-to-right flow of the sentence. 

The goal of AAC, though, is not to teach reading but rather to facilitate communication (although immersion in text-via-AAC seemed to accelerate Maya's reading ability). I imagine that there are apps/systems that only display text in their sentence strip, and there are others that display one or more icons along with their words. I imagine that "should the icons be displayed with the text in AAC" has been a question discussed and debated, particularly among those who develop these systems.

From what I've seen of AAC using/learning, it seems to me that having the icons displayed in the sentence strip along with the words being spoken makes AAC learning/using easier---particularly for young AAC users or for AAC users who are learning their system. (Side note: I, as a part-time user, am perpetually learning the system. I can't imagine when it will be effortless for me to use it, even with the automatic motor planning element. Even as I become more fluent with frequently-used words, we are constantly adding new words to the vocabulary.) While I certainly can't make any grand claims on behalf of all AAC users, I want to share what I've seen with my kids, who present as an interesting case study.

Background: In Speak for Yourself, a word takes one or two taps to say (no word takes more than two taps). When a word is selected, it is spoken aloud and moves to the sentence strip at the top of the screen. The two icons that were tapped to select the word are displayed under the word, as shown below.
"I" is a 1-hit word, and has 1 icon displayed. "corn" is a 2-hit word and has 2 icons displayed. First the user selects the initial icon, and that takes them to a secondary screen where they can find the second icon.


Will (2.5 years) and Maya (6.75 years): Maya has been using SFY for the past 3 years, Will has been using it for a little over a year. Both were obviously pre-literate at the time they started using the app. It's difficult for me to remember much of Maya's early AAC use (because I'm old, my memory is spotty, and I was just so excited that it was working that I wasn't scrutinizing much)---but now I am watching Will become an elective AAC user through an increasingly academic eye. So here are my take-aways, with a few minutes of video of a small experiment (taken yesterday).

First, a video of Will. For this experiment I selected two words that I knew he had never seen in SFY. I selected one (while the screen was out of his view) and then placed the talker in front of him to see if he was able to properly follow the icon path to select the word without help. 



The big take-aways: Will understands left-to-right flow of (icon) language and is able to follow it independently. Also, having the icons displayed in the sentence strip allows him to practice and copy words that he would otherwise be unable to find.

Broken down:
1. For Will, use of the app has solidified the concept that text (or icons) read from left-to-right. I don't know whether he initially learned this concept from reading stories at home or from studying the order of the icons in the app, but he gets it. He doesn't hesitate when he sees the 2-button-path to Tyrannosaurus Rex, he knows immediately that the button on the left is pressed first, followed by the one on the right.

2. Will is now, I suspect, fully able to "read" Speak for Yourself. The best "Oh yeah? Prove it!" experiment I can come up with would be to print off a sentence of icons without including the text, and see if he can recreate the sentence. My suspicion is that this would be easy for him.

Next, a video (in two clips) of Maya. First, to include her, I asked about Tyrannosaurus Rex and escalator, but she already knew where they were (her vocabulary knowledge of SFY outpaces mine). After that I picked a word that I was 100% certain she had never seen---"Trackball" (I don't know what a Trackball is, it's a pre-programmed word in the app). Then something interesting happened: after she viewed the word, I accidentally erased it, so she no longer had the icon path on screen to follow. This leaves two possibilities for how she found the word: a) she memorized both icons in the sequence, b) she memorized the first icon and scanned the page for a word that had the text features of "trackball." I think she did the latter, since she would easily recognize "ball" and also she has long mastered starting sounds.


The big take-away: Maya uses the icon path to help her navigate towards the target word, then either uses an icon OR reading text to locate the target. (I guess I could see if the latter was correct by showing her a novel target with only the first icon and not the final target icon, and see if she was able to use decoding to find the word.)

Maya is an early reader, and her reading has been loosely assessed as at-or-above grade level. I assume that she also read the icons of SFY and that following the icons made it easier for her to practice words in the app or to copy words modeled by other people (Will is currently doing both of those things). However, learning from the icons appears not to have negatively impacted her ability to attend to text. From what I have seen, she studies the icons in order to locate words, but she also notes the text. For commonly used words she doesn't seem to notice the icons in the sentence strip at all, but if she's in a therapy session where new words are being modeled she will lean over to closely examine the screen of the therapist's iPad, and then she will select the same icons to produce the word on Mini.

Conclusion:  Ha! There's certainly nothing to be "concluded" here, from two specific kids with one specific app in one specific home. But I found it interesting to see Will already following the icon language, and to see that Maya still uses it now for new or less frequently used words. Also, when I use a novel word I find myself staring at the icons in the sentence strip, trying to memorize the path to that word. It seems valuable to have icons displayed to facilitate and solidify AAC use/learning.  

 

Disclaimer: As always, I'm not a professional (nor do I play one on the internet). Comments, critiques, links to research, and other thoughts are welcome below or on our FB page!



Thursday, February 19, 2015

AAC Sibling + Fantastic Search Feature = (Really Cute) Success!

Last week I wrote a post that highlighted the interesting (and kind of amazing) development of AAC siblings: both in terms of their inter-sibling spoken language and their inter-sibling AAC use. That post included a video in which Maya was delightedly using the search feature in her app (Speak for Yourself) to walk her way towards words. These were words that she already knew the location of, but practicing spelling and navigating was more fun than simply selecting the words would be. (More on how the search feature works in a minute.)

Yesterday, in a development that shouldn't have been surprising (but still kind of was), I found Will sitting in his crib after nap time using the search feature in his talker. He was opening the search feature, entering "m", selecting m&ms, and then following the path to the button for m&ms. And he was very smiley and proud of himself for figuring it out.

The concept of "search feature" might be abstract for the non-AAC-users out there, so let me break it down. Speak for Yourself has what is arguably the best search feature in the world of AAC (certainly the best one that I've seen). A user taps the magnifying glass in the upper left corner, then begins to spell the word that they're searching for.

search feature open with "m" entered

Upon entry of the first letter a scrollable list drops down, and the user can scroll through and select their target word. Two things about this list are unique (and awesome): First, you don't have to spell the whole word correctly. Maya has been able to discern the starting sound of a word for quite some time, so this feature became useful for her as soon as she could type in that initial sound. Second, the icons appear right next to the word---so a child who knows starting sounds but is pre-literate can find their target by recognizing the icon next to the word.

When the word is selected a flashing purple square outlines the path to the word. First, it flashes around the button on the main screen . . .

main screen with the word WITH highlighted-the target word (m&m) is found under that screen

then it flashes around the target word on the secondary screen.

m&m highlighted on the secondary screen

The search feature makes it easy for Maya's teachers and therapists to quickly find words that they want to model. Maya enjoys experimenting with it, but I've also seen her attempt to use it purposefully---when I say "where's 'spectacular' " and she furrows her brow, pauses, and then opens up the search and types in "s." And now Will is in on the game.

Here he is (that's a chocolate ice cream mustache), searching:




AAC siblings are a special kind of awesome. And also, apparently the search feature in SFY is so intuitive that a two-year-old can use it.


Note #1: If you've got an eye for detail, you may notice that the coloring of the screen in the still pictures differs from the color-coding of the screen in the video. The still shots were provided by another user of SFY and reflect her color-coding. (I wrote this at my college library and didn't have a talker on hand to get still shots!) Different users and families choose to color code the buttons in different ways. 

Note #2: I am not an employee of Speak for Yourself.

Sunday, February 15, 2015

The Remarkableness of AAC Siblings (part 1 of 7,682)

I don't have 7,682 posts lined up--but as I watch Maya and Will interact with their talkers, I stop to note how remarkable it is, in the truest sense of that word. Several people have asked me about their interaction, both in general (curious about what sibling play and interaction looks like, when language is so limited) and related to the talkers (curious about whether they sit and communicate through the talkers).

In general, they play and bicker and tease and celebrate, just like any siblings. Will's verbal language is just now at the tipping point that I've anticipated . . . the point where he begins to surpass her with speech. They both are mimicking a startling amount, but his imitation of my words and phrases is more clearly articulated than hers (although hers is hugely noteworthy, since prior to now she wouldn't really try much in the way of spontaneous imitation). Sometimes they play quietly, sometimes they boss each other around ("Right here, Will!" "No, Mema!"), and sometimes they argue loudly. The arguing is a combination of "No!" and yelling strings of nonsense at each other (which is somewhat hilarious). They have at least one word that is meaningful only to them ("dah-tah") and despite multiple attempts at asking each of them what it means, I'm left clueless. Maya always shakes her finger at him when she says it, and they both use it when someone is getting in trouble, or when one person is aggravated and the other is teasing, or when there is amused fighting. They also understand each other kind of effortlessly, and when I can't figure out what someone is saying, and the speaker can't translate him/herself with a talker, I ask the sibling "Do you know what s/he is saying?"



Their talker-related interaction has just started to evolve recently. Up until this point they would each use a talker independently, and if one of them was using a talker the other would certainly take note and become an active listener, but that was about it. So, if Maya had Mini at the table and started to say something, Will would try to lean over or pull Mini sideways so that he could watch what she was saying. Sometimes the listener would then try to take a turn on the talker (Will reaching over to say something with Mini after Maya was done) but this was often met with resistance from the speaker ("No!" and pulling the talker back). This type of exchange went both ways.

Recently, there's been a shift. It started one night when I set them both up next to each other at dinner, with talkers on the table. Maya said something . . . and Will quickly copied it. This happened a few times, then they switched---Will said something and Maya copied it. This game continued for the duration of dinner. Hours later, when the kids were playing in the living room, I heard Will say some of the words that Maya had shown him at dinner. He's an impressively fast AAC learner.

This video was taken last night. The kids are eating (and Will is protesting the appearance of the camera). Maya has decided to experiment with/practice using the search feature, and while I'm not sure if her first search was deliberate, there is no doubt that the following one was. (She types in a letter and it pulls up a list of words, which she scrolls through to find her target.) Will and Maya take turns prompting each other, when they appear unsure that the sibling can find what they're looking for, or maybe when they think the other is taking too long?





It feels noteworthy, this type of exchange. Their joint attention is pretty cool, as is their patience and prompting and turn-taking in this little game. They've moved from experimenting next to each other with talkers to experimenting together, certainly. I imagine the last step will be actually talking to/with each other through the devices, using them conversationally. I'm kind of on the edge of my seat to see that unfold.


 


Sunday, February 1, 2015

Maya reads a book

Nonverbal children struggle to learn phonics, they said.

Children with apraxia and other speech sound disorders are at high risk of literacy-related difficulties, they said.

With regard to cognitive functioning, Maya is in the 0.4th percentile when compared to same-age peers, they said.


They didn't know.


We didn't know either . . . . . . . . . . . . . . . . . . . but we presumed competence.


Maya, this afternoon, reading:




Notes: The words in parentheses are the words that appear in the text that she omitted when she read aloud. Also, this is a long clip. There's a chunk in the middle that's not very interesting. But I don't like watching videos that are all chopped up, because I wonder what may have been edited out, so I left it long.


There are a few interesting things here. First, a skeptic could wonder whether she's really saying the correct words (since her speech sounds are so limited/garbled)---this is why I picked a random word (ride) and had her clarify with her device. I've done other activities in which I asked her to read sentences solely with her talker, and she did so correctly. So, if you don't believe that she's reading accurately . . . well, then, that's totally your right---but I hope you (and your skepticism) are employed far away from the classroom/therapy/special needs sector.

Second, she seems to really understand the text. When I asked her about the reindeer's name, she looked back and found it. When Anna fell off the horse and it was cold, she said "Oh no!"

Third, her word omissions are interesting. She will often drop words that don't change the meaning of the text (the, a, an, etc). She also omits those words in speech and when using Mini, and I wonder if generating them (via speech or AAC) just seems not worth the effort? And, if so, is she reading them and choosing not to generate them, or is she not seeing them there at all? I wonder if there is research about similar omissions among children with speech difficulties or AAC users. She also skips reading "Anna" on most pages---maybe that word keeps throwing her off, or maybe she doesn't have an easy way to say it? I'm not sure.

And the most interesting thing, of course, is that she blows me away. She reads over my shoulder now. She can pick out words in my intentionally-sloppy handwriting.

She is, undeniably, a reader.

If you're a literacy or special ed person with thoughts (even if they are that I'm doing something wrong, or should be doing something differently) I would love to hear from you (here, on FB, or at uncommonfeedback@gmail.com).