I have built an app with three activities (ActivityA, ActivityB, ActivityC). Each activity depends on the previous one: imagine that ActivityA initializes a service to run in the background, and ActivityC performs some operations and produces results that are delivered back to ActivityB.

ActivityA -> ActivityB -> ActivityC

There is another circumstance: ActivityC can be launched by an intent without going through ActivityA or ActivityB. In that case the initialization in ActivityA is never called, and when the user presses the phone's back button the results can't be delivered to ActivityB. In ActivityB, I receive the results like this:

@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
    switch (requestCode) {
        case Constant.REQUEST_CODE_ENABLE_NET:
            if(resultCode == Constant.RESULT_CODE_NET_ENABLED) {
              // imagine that I get the device list and show it in a ListView here

            }
            break;
    }
}

How do I handle this circumstance? However ActivityC is launched, whether from ActivityB or some other way, how can I make sure that ActivityA and ActivityB are launched first?

Thanks for any help.
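One common pattern, offered as a sketch rather than the only way: in ActivityC's onCreate(), check whether ActivityC is the root of its task (i.e. it was launched directly); if so, rebuild the A -> B -> C back stack with TaskStackBuilder and finish. The Android calls are shown in comments below; the decision itself is plain Java so the sketch stays self-contained:

```java
public class EntryCheck {
    // In ActivityC.onCreate(), roughly (Android calls shown as comments):
    //
    //   if (isTaskRoot()) {                        // launched directly, no A/B below us
    //       TaskStackBuilder.create(this)
    //           .addNextIntent(new Intent(this, ActivityA.class))
    //           .addNextIntent(new Intent(this, ActivityB.class))
    //           .addNextIntent(getIntent())        // relaunch C on top of A -> B
    //           .startActivities();
    //       finish();
    //       return;
    //   }
    //
    // ActivityA can then run its service initialization, and pressing back
    // in C lands on B as usual. The method below is just the branch logic.
    public static String decide(boolean launchedAsTaskRoot) {
        return launchedAsTaskRoot ? "rebuild A->B->C stack" : "continue normally";
    }
}
```

Note that activities rebuilt this way are not started with startActivityForResult, so in the direct-launch case ActivityB may need to pick up the results another way (for example in onResume) instead of onActivityResult.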

The docs state:

As mentioned previously, next to the main sourceSet is the androidTest sourceSet, located by default in src/androidTest/ .... The sourceSet should not contain an AndroidManifest.xml as it is automatically generated.

So, if I want to add extra permissions for the tests, what is the correct way to do it?
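One approach, hedged because behavior has changed across Android Gradle plugin versions: newer plugin versions merge a manifest you place in src/androidTest/ into the generated test manifest, so test-only permissions can be declared there. Verify against your plugin version, since the docs quoted above describe the file as purely generated:

```xml
<!-- src/androidTest/AndroidManifest.xml; the permission shown is only an example -->
<manifest xmlns:android="http://schemas.android.com/apk/res/android">
    <uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
</manifest>
```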

Here is the code from my Android project:

for (int x = 0; x < targetBitArray.length; x += weight) {
    for (int y = 0; y < targetBitArray[x].length; y += weight) {
        targetBitArray[x][y] = bmp.getPixel(x, y) == mSearchColor;
    }
}

but this code wastes a lot of time.

So I need a way that is faster than Bitmap.getPixel(). I'm trying to get the pixel colors from a byte array converted from the bitmap, but I can't get it to work. How can I replace Bitmap.getPixel()?
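A common replacement is one bulk copy with bmp.getPixels(pixels, 0, width, 0, 0, width, height) into an int[], then plain row-major indexing, which avoids a call per pixel. The indexing is ordinary Java; here it is sketched without the Android Bitmap class so it stays self-contained (PixelScan and matchColor are illustrative names):

```java
public class PixelScan {
    // Pixel (x, y) of a bitmap copied with getPixels(...) lives at
    // index y * width + x in the flat array.
    public static boolean[][] matchColor(int[] pixels, int width, int height,
                                         int searchColor, int weight) {
        boolean[][] result = new boolean[width][height];
        for (int x = 0; x < width; x += weight) {
            for (int y = 0; y < height; y += weight) {
                result[x][y] = pixels[y * width + x] == searchColor;
            }
        }
        return result;
    }
}
```

In the original loop this replaces bmp.getPixel(x, y) == mSearchColor with pixels[y * width + x] == mSearchColor after a single getPixels call.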

(screenshot omitted)

As shown in the image above, all the necessary information has been filled in, but the "OK" button is still disabled and I can't proceed. Which part is going wrong?

https://code.google.com/p/android-coverflow/

I am using the example above for a coverflow. When I select an image I set the view's alpha to 0.5, otherwise to 1. When I select an image the value changes, but when I scroll vertically to another image and then come back to the image I selected, I see another image blended into the selected one. Below is my code:

private void setupListeners(final CoverFlow mCoverFlow) {

    mCoverFlow.setOnItemClickListener(new OnItemClickListener() {

        @Override
        public void onItemClick(final AdapterView<?> parent,
                final View view, final int position, final long id) {
            Log.d("id1", String.valueOf(id));

            int selected = (int) id;
            String match = String.valueOf(selected);
            StringBuilder sb = new StringBuilder();
            if (Constants.selected_position.size() > 0) {
                for (Integer s : Constants.selected_position) {
                    sb.append(s);
                    sb.append(",");
                }
                String strfromArrayList = sb.toString();
                if (strfromArrayList.contains(match)) {
                    for (int i = 0; i < Constants.selected_position.size(); i++) {
                        if (Constants.selected_position.get(i) == id) {
                            Constants.selected_position.remove(i);
                        }
                    }
                    view.setAlpha(1);
                    // createBitmap1(position, view, parent);
                    selectedimages--;
                } else {
                    view.setAlpha((float) 0.5);
                    // createBitmap(position, view, parent);
                    selectedimages++;
                    Constants.selected_position.add(selected);
                }
            } else {
                view.setAlpha((float) 0.5);
                // createBitmap(position, view, parent);
                selectedimages++;
                Constants.selected_position.add(selected);
            }
        }
    });
}
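Two guesses about what's going wrong. First, the joined-string membership test is fragile: if id 12 is selected, the built string "12," contains "1", so clicking id 1 is misread as a deselect. Toggling directly on the list avoids this (plain Java sketch; SelectionToggle is an illustrative name):

```java
import java.util.List;

public class SelectionToggle {
    // Toggle id's membership in the selection list; returns true when the
    // item is selected after the call. Using contains/remove on the list
    // avoids the joined-string check, where "12," falsely contains "1".
    public static boolean toggle(List<Integer> selected, int id) {
        if (selected.contains(id)) {
            // remove(Integer) removes the value; remove(int) would remove an index.
            selected.remove(Integer.valueOf(id));
            return false;
        }
        selected.add(id);
        return true;
    }
}
```

Second, the ghost image after scrolling away and back is the classic view-recycling symptom: the alpha set in onItemClick sticks to whatever item the recycled view shows next, so the adapter's getView should also set each view's alpha from the selection list on every bind.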

I am using a BaseAdapter in Android. This is my code:

 public View getView(int position, View convertView, ViewGroup parent) {
        ViewHolder holder;
        if (convertView == null) {
            convertView = layoutInflater.inflate(R.layout.article_list_item, null);
            holder = new ViewHolder();
            holder.favoriteImage= (ImageView) convertView.findViewById(R.id.imageViewFavoriteItem);
            convertView.setTag(holder);
        } else {
            holder = (ViewHolder) convertView.getTag();
        }
        ArticleItem articleItem = (ArticleItem) listData.get(position);
        String isFavorite= articleItem.get_favorite();
        if(isFavorite.equals("1"))
        {
            holder.favoriteImage.setImageResource(R.drawable.ic_star_active);
        }

        return convertView;
    }

The problem is that I have a list of items and I want to add a star icon to the items that have been added to favorites. This works:

        String isFavorite= articleItem.get_favorite();
        if(isFavorite.equals("1"))
        {
           //Statement is okay
        }

However, I can't manage to use it like this:

        String isFavorite= articleItem.get_favorite();
        if(isFavorite.equals("1"))
        {
            holder.favoriteImage.setImageResource(R.drawable.ic_star_active);
        }

I did some research and came across this: Android BaseAdapter Context

- an Activity is a single, focused thing that the user can do. Almost all activities interact with the user, so the Activity class takes care of creating a window for you in which you can place your UI.
- a Fragment is a piece of an application's user interface or behavior that can be placed in an Activity.
- an Adapter object acts as a bridge between an AdapterView and the underlying data for that view. It is also responsible for making a View for each item in the data set.

According to that answer, I can't use it in an Adapter. I'm not sure, as I come from a PHP background. I'm stuck now.
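You can call setImageResource on a view held by the adapter; the snippet compiles as written. One likely culprit, offered as a guess: getView sets the star only when isFavorite equals "1" and never resets it, so a recycled convertView that previously showed a star keeps it on non-favorite rows. The fix is to set the image in both branches. A plain-Java model of why the missing else matters:

```java
public class RecyclingDemo {
    public static final int STAR = 1, NONE = 0;

    // Stands in for the ViewHolder: it keeps whatever icon was last set,
    // exactly like a recycled convertView does.
    public static class Holder { public int icon = NONE; }

    // Buggy bind: only favorites touch the icon, so a recycled holder
    // can carry a stale star onto a non-favorite row.
    public static void bindBuggy(Holder h, boolean favorite) {
        if (favorite) h.icon = STAR;
    }

    // Fixed bind: both branches set the icon explicitly.
    public static void bindFixed(Holder h, boolean favorite) {
        h.icon = favorite ? STAR : NONE;
    }
}
```

In getView that means adding an else branch that gives holder.favoriteImage the inactive icon (or clears it with setImageDrawable(null)).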

I'm currently building an application using the zxing QR code scanner, and I'm thinking about using the camera on a smart watch instead of the phone camera.

But I am not sure if it is even possible to do that with zxing.

I'd like to know whether any Android experts have done this before; maybe you can give me some hints?

In my app I have to store a URL in a variable. I have to get the URL from a static URL (example.com), behind which there is a dynamic URL that I will have to store as the variable's value. How can I do this, please?

I've got one bitmap on a canvas (backgroundBitmap) that I want to remain unchanged and another smaller bitmap (draggableBitmap) that I want the user to be able to drag above backgroundBitmap. (And I do mean "above" as in z-axis).

My thinking is that I just redraw the background on each ACTION_MOVE. When I do this with a solid color, it works perfectly. When I redraw backgroundBitmap instead of the color, backgroundBitmap remains visible but draggableBitmap repeats itself along the dragged path. Why does the solid color work to "clear" the image while a bitmap doesn't?

@Override
public boolean onTouchEvent(MotionEvent event) {
    float touchX = event.getX();
    float touchY = event.getY();

    switch (event.getAction()) {
        case MotionEvent.ACTION_DOWN:
            // Nothing here
            break;
        case MotionEvent.ACTION_MOVE:
            // Drawing the bitmap with the line below only draws the bitmap once
            // (or so it seems)
            drawCanvas.drawBitmap(backgroundBitmap, 0, 0, null);
            // Drawing the color with the line below works!!!
            // drawCanvas.drawColor(Color.BLACK);
            drawCanvas.drawBitmap(draggableBitmap, touchX, touchY, null);
            break;
        case MotionEvent.ACTION_UP:
            // Nothing here
            break;
        default:
            return false;
    }

    invalidate();
    return true;
}

I'm trying to find the common terminology used for a concept. First, some background: we develop software for businesses that host their own database and have many of their own computers connecting to their own server/database. Now we're introducing cloud integration, which consists of things like connecting our software/database to their eCommerce website, mobile applications, web applications, etc.

To avoid requiring them to configure their own web server (exposing their database by opening ports), we would like to provide, by default, a global web server that acts as a relay between their database and everything else on the internet. The connection is established the opposite way around: a service runs alongside their database, connects out to this global server, and then listens for requests through that socket. We'll still provide the option to connect directly to their own server (if they want to bypass our global server and configure their own web server), but for ease of use this single global server is the default. It is simply a "dummy" server that only forwards requests from various web devices to their local server.

Connection Mechanism

Regardless of what protocols I use, what standard terminology describes such a scenario? My best guess is "socket relay server", but I'm sure there's a more common term.

Are there any limitations to developing a video-centric (playback only) mobile app using Cordova/PhoneGap versus native development? For example, is device and OS support of the HTML5 video tag challenging and inconsistent when using Cordova/PhoneGap? And if so, are these same challenges encountered with native app development?

I'm experiencing a problem with MFMailComposeViewController on iPad:

When I attach a file on an iPhone device the document is attached for download, but when I do it on an iPad, the content of the PDF is visible in the mail body.

I load the view this way:

    MFMailComposeViewController *mailComposer = [[MFMailComposeViewController alloc] init];
    mailComposer.mailComposeDelegate = self;
    [mailComposer setSubject:@"TEST"];
    [mailComposer setModalTransitionStyle:UIModalTransitionStyleCoverVertical];
    [mailComposer addAttachmentData:[NSData dataWithContentsOfFile:dataPath] mimeType:@"application/pdf" fileName:@"TEST.pdf"];
    [self presentViewController:mailComposer animated:YES completion:nil];

I don't understand what's wrong here; any ideas?

Thanks and regards

I am using Storyboard, and a TabBarController as my base controller, which will link to both Navigation Controllers and regular View Controllers.

When linking up View Controllers using the relationship segue option, this is what I see after hooking up more than one view controller.

(screenshot omitted)

While this isn't a particularly large issue (at run time the large blocks do not show), it does present a problem when I want to alter the TabBar item images, as well as when working with the general views. I was wondering if anyone else has encountered this issue and, if so, has found a workaround?

Thanks.

I'm trying to compute the histogram of an image using vImage's vImageHistogramCalculation_ARGBFFFF, but I'm getting a vImage_Error of kvImageNullPointerArgument (error code -21772).

Here's my code:

- (void)histogramForImage:(UIImage *)image {

    //setup inBuffer
    vImage_Buffer inBuffer;

    //Get CGImage from UIImage
    CGImageRef img = image.CGImage;

    //create vImage_Buffer with data from CGImageRef
    CGDataProviderRef inProvider = CGImageGetDataProvider(img);
    CFDataRef inBitmapData = CGDataProviderCopyData(inProvider);

    //The next three lines set up the inBuffer object
    inBuffer.width = CGImageGetWidth(img);
    inBuffer.height = CGImageGetHeight(img);
    inBuffer.rowBytes = CGImageGetBytesPerRow(img);

    //This sets the pointer to the data for the inBuffer object
    inBuffer.data = (void*)CFDataGetBytePtr(inBitmapData);

    //Prepare the parameters to pass to vImageHistogramCalculation_ARGBFFFF
    vImagePixelCount *histogram[4] = {0};
    unsigned int histogram_entries = 4;
    Pixel_F minVal = 0;
    Pixel_F maxVal = 255;
    vImage_Flags flags = kvImageNoFlags;

    vImage_Error error = vImageHistogramCalculation_ARGBFFFF(&inBuffer,
                                                             histogram,
                                                             histogram_entries,
                                                             minVal,
                                                             maxVal,
                                                             flags);
    if (error) {
        NSLog(@"error %ld", error);
    }

    //clean up
    CGDataProviderRelease(inProvider);
}

I suspect it has something to do with my histogram parameter, which, according to the docs, is supposed to be "a pointer to an array of four histograms". Am I declaring it correctly?
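Not quite: `vImagePixelCount *histogram[4] = {0};` declares an array of four pointers and initializes all of them to NULL, which is exactly what kvImageNullPointerArgument (-21772) is complaining about. The function expects each of the four pointers (alpha, red, green, blue) to point at its own array of histogram_entries elements. A standalone C sketch of the required layout (unsigned long stands in for vImagePixelCount so this compiles without Accelerate):

```c
#include <stddef.h>

enum { CHANNELS = 4, ENTRIES = 4 };   /* ENTRIES mirrors histogram_entries */

/* Backing storage for the four per-channel histograms. */
static unsigned long storage[CHANNELS][ENTRIES];

/* Point each channel pointer at real storage. An initializer of {0} on the
   pointer array would instead leave all four channels NULL. */
void init_histogram(unsigned long *histogram[CHANNELS]) {
    for (int c = 0; c < CHANNELS; c++)
        histogram[c] = storage[c];
}
```

After filling the pointer array this way, the null-pointer error should go away. Separately, the FFFF variant expects 32-bit float pixels; data copied straight from a typical CGImage is 8-bit, so vImageHistogramCalculation_ARGB8888 may be the variant that actually matches your buffer (worth checking against the image format).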

Thanks.

I've put a UIImageView into my XIB file, and every time I press the button to visit the screen it's on, I get a SIGABRT. Does anyone know why this might be?

I'd like to add a UIButton to my UIViewController. I have a UIPageViewController filling the whole screen. When I try to add this button, no button is visible on the screen. What am I doing wrong?

CODE:

UIButton *button = [UIButton buttonWithType:UIButtonTypeRoundedRect];
    [button addTarget:self
               action:@selector(setButtonVisibleClose:)
     forControlEvents:UIControlEventTouchUpInside];
    [button setTitle:@"Zamknij widok" forState:UIControlStateNormal];
    button.frame = CGRectMake(80.0, 210.0, 160.0, 40.0);
    button.backgroundColor = [UIColor whiteColor];
    [self.view addSubview: button];


    self.pageController = [[UIPageViewController alloc] initWithTransitionStyle:UIPageViewControllerTransitionStyleScroll navigationOrientation:UIPageViewControllerNavigationOrientationHorizontal options:nil];

    self.pageController.dataSource = self;
    [[self.pageController view] setFrame:[[self view] bounds]];

    BasicViewViewController *initialViewController = [self viewControllerAtIndex:0];

    [initialViewController setImageViewToDisplay];

    NSArray *viewControllers = [NSArray arrayWithObject:initialViewController];

    [self.pageController setViewControllers:viewControllers direction:UIPageViewControllerNavigationDirectionForward animated:NO completion:nil];

    [self addChildViewController:self.pageController];
    [[self view] addSubview:[self.pageController view]];
    [self.pageController didMoveToParentViewController:self];

In the Xcode 6 betas, when I delete a constraint it isn't removed completely but grayed out. I thought that implied the constraint was used in a different size class, but that doesn't seem to be the case. Also, how do you permanently delete these constraints?

So I am trying to have my button's border change color when pressed and I'm finding some issues. This is what I have:

UIColor *blackColor;
UIColor *transBlack = [blackColor colorWithAlphaComponent:05f];
self.layer.borderColor = [UIColor transBlack].CGColor;

The last line gives me two errors: "No known class method for selector 'transBlack'" and "Property 'CGColor' not found on object of type 'id'". I have no idea what either of these means. I'd like to get that last line to work, and an explanation of why the compiler is complaining would be very helpful.

Any and all help would be greatly appreciated.

Edit: So I tried using a different method:

colorWithHue:0 saturation:0 brightness: 0 alpha: 0.5

and that seems to have broken my button's push outlet. I'm not sure why yet.

Edit2:

This seemed to correct the original issue with colorWithAlphaComponent:

UIColor *transBlack = [[UIColor blackColor] colorWithAlphaComponent:0.5f];
self.layer.borderColor = transBlack.CGColor;

For more information, please look at the selected answer.

I have a very annoying problem. I have a ViewController with a UIImageView in it. The UIImageView should display a slide show. The images for it come from NSURLs, so loading takes a bit of time.

    - (void)viewDidLoad {
        [super viewDidLoad];
        [self loadImages];
    }

This is how I get the images. The problem I have is that while the images are loading I only see a black screen. At the beginning and the end of the -(void)loadImages method I implemented a UIActivityIndicator to cover the loading time. I already tried -(void)viewWillAppear and -(void)viewDidLayoutSubviews, but nothing worked.

Thanks for the help, Jannes

I am currently building a custom keyboard and I am almost done. One problem I have is with the delete button. When the user taps the delete button, it does what it should and deletes the previous character. However, when the user holds the button down, nothing happens. How do I make the keyboard delete continuously while the delete button is held, like the standard iOS keyboard? This is my current code:

#pragma mark Keyboards

- (void)addGesturesToKeyboard {
    [self.keyboard.deleteKey addTarget:self action:@selector(pressDeleteKey) forControlEvents:UIControlEventTouchUpInside];
}

and:

- (void)pressDeleteKey {
    [self.textDocumentProxy deleteBackward];
}

Thanks for your help.

You can run the Swift REPL with a couple of different values for the --sdk option. You can run:

xcrun swift -v -sdk $(xcrun --show-sdk-path --sdk iphonesimulator)

or

xcrun swift -v -sdk $(xcrun --show-sdk-path --sdk macosx)

There is also

xcrun swift -v -sdk $(xcrun --show-sdk-path --sdk iphoneos)

which doesn't seem to work very well and causes lots of errors.

How will my output differ when using the iphonesimulator SDK vs. the macosx SDK?