I'm trying to use the Volley library to download images from my server. In my activity I add items dynamically and then swap in each image at runtime.

Below is the code of my attempt to fetch the pictures:

public void updateThumbnails(ArrayList<Book> arrBook, ArrayList<View> arrView) {
    if (arrBook.size() <= 0 || arrView.size() <= 0) {
        return;
    }
    int intBooks = arrView.size();
    ImageLoader imageLoader = AppController.getInstance().getImageLoader();
    for (int intIndex = 0; intIndex < intBooks; intIndex++) {
        final View _viewLoader = arrView.get(intIndex);
        imageLoader.get(Const.START_REQUEST_BOOK_IMAGE + arrBook.get(intIndex).getId().toString() + ".jpg",
                new ImageLoader.ImageListener() {
            @Override
            public void onResponse(ImageLoader.ImageContainer imageContainer, boolean isImmediate) {
                // ImageLoader delivers an initial response (often with a null bitmap)
                // and later the network result, so guard against the null pass.
                if (imageContainer.getBitmap() == null) {
                    return;
                }
                ImageView imgBook = (ImageView) _viewLoader.findViewById(R.id.img_book);
                animationChangeImage(imageContainer.getBitmap(), imgBook);
            }

            @Override
            public void onErrorResponse(VolleyError volleyError) {
                // TODO: show a placeholder or log the failure
            }
        });
        TextView txtTitleBook = (TextView) _viewLoader.findViewById(R.id.name_book);
        txtTitleBook.setVisibility(View.INVISIBLE);
    }
}
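
The animationChangeImage helper is referenced above but not shown; here is a minimal sketch of what it might look like, assuming it simply cross-fades the ImageView from its current drawable to the downloaded bitmap (the original implementation may differ):

private void animationChangeImage(Bitmap bitmap, ImageView imageView) {
    // Fall back to a transparent drawable if nothing is set yet.
    Drawable current = imageView.getDrawable();
    if (current == null) {
        current = new ColorDrawable(Color.TRANSPARENT);
    }
    // Cross-fade from the old drawable to the freshly downloaded bitmap.
    TransitionDrawable transition = new TransitionDrawable(new Drawable[] {
            current,
            new BitmapDrawable(imageView.getResources(), bitmap)
    });
    imageView.setImageDrawable(transition);
    transition.startTransition(300); // fade over 300 ms
}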

I'm trying to create bounding boxes with OpenCV on Android, but the OpenCV tutorial is in C++ and I need to convert the following code to the OpenCV Java API for Android. I have already referred to this. I need help from Java and OpenCV experts.

vector<vector<Point> > contours;
vector<Vec4i> hierarchy;

/// Detect edges using Threshold
threshold( src_gray, threshold_output, thresh, 255, THRESH_BINARY );

/// Find contours
findContours( threshold_output, contours, hierarchy, CV_RETR_TREE, CV_CHAIN_APPROX_SIMPLE, Point(0, 0) );
// ---- converted up to this point ----

/// Approximate contours to polygons + get bounding rects and circles
vector<vector<Point> > contours_poly( contours.size() );
vector<Rect> boundRect( contours.size() );
vector<Point2f> center( contours.size() );
vector<float> radius( contours.size() );

for( int i = 0; i < contours.size(); i++ )
{
    approxPolyDP( Mat(contours[i]), contours_poly[i], 3, true );
    boundRect[i] = boundingRect( Mat(contours_poly[i]) );
    minEnclosingCircle( (Mat)contours_poly[i], center[i], radius[i] );
}

/// Draw polygonal contour + bounding rects + circles
Mat drawing = Mat::zeros( threshold_output.size(), CV_8UC3 );
for( int i = 0; i < contours.size(); i++ )
{
    Scalar color = Scalar( rng.uniform(0, 255), rng.uniform(0, 255), rng.uniform(0, 255) );
    drawContours( drawing, contours_poly, i, color, 1, 8, vector<Vec4i>(), 0, Point() );
    rectangle( drawing, boundRect[i].tl(), boundRect[i].br(), color, 2, 8, 0 );
    circle( drawing, center[i], (int)radius[i], color, 2, 8, 0 );
}

This is my conversion up to findContours; please help me extract the objects.

// Convert to grayscale
Imgproc.cvtColor(ImageMatin, ImageMatBk, Imgproc.COLOR_RGB2GRAY);

// Sobel operator in the horizontal direction
Imgproc.Sobel(ImageMatBk, ImageMatout, CvType.CV_8U, 1, 0, 3, 1, 0.4, Imgproc.BORDER_DEFAULT);

// Smooth with a Gaussian blur
Imgproc.GaussianBlur(ImageMatout, ImageMatout, new Size(5, 5), 2);

// Binarize with Otsu's threshold
Imgproc.threshold(ImageMatout, ImageMatout, 0, 255, Imgproc.THRESH_OTSU);

// Find contours (note: findContours modifies the input image)
ArrayList<MatOfPoint> contours = new ArrayList<MatOfPoint>();
Imgproc.findContours(ImageMatout, contours, new Mat(), Imgproc.RETR_TREE, Imgproc.CHAIN_APPROX_SIMPLE);
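
For the remaining steps (polygon approximation, bounding rects, enclosing circles, and drawing), a rough Java translation might look like the sketch below. It assumes OpenCV 2.4.x, where the drawing helpers live in Core (in OpenCV 3.x they moved to Imgproc), and converts each contour to MatOfPoint2f, since approxPolyDP and minEnclosingCircle require floating-point points:

// Uses org.opencv.core.{Core, CvType, Mat, MatOfPoint, MatOfPoint2f, Point, Rect, Scalar},
// org.opencv.imgproc.Imgproc, java.util.Collections and java.util.Random.
Mat drawing = Mat.zeros(ImageMatout.size(), CvType.CV_8UC3);
Random rng = new Random();

for (int i = 0; i < contours.size(); i++) {
    // approxPolyDP and minEnclosingCircle need floating-point points.
    MatOfPoint2f contour2f = new MatOfPoint2f(contours.get(i).toArray());
    MatOfPoint2f poly2f = new MatOfPoint2f();
    Imgproc.approxPolyDP(contour2f, poly2f, 3, true);
    MatOfPoint poly = new MatOfPoint(poly2f.toArray());

    // Bounding rectangle of the approximated polygon.
    Rect boundRect = Imgproc.boundingRect(poly);

    // Minimum enclosing circle of the approximated polygon.
    Point center = new Point();
    float[] radius = new float[1];
    Imgproc.minEnclosingCircle(poly2f, center, radius);

    // Draw the approximated contour, its bounding rect and its enclosing circle.
    Scalar color = new Scalar(rng.nextInt(256), rng.nextInt(256), rng.nextInt(256));
    Imgproc.drawContours(drawing, Collections.singletonList(poly), 0, color, 1);
    Core.rectangle(drawing, boundRect.tl(), boundRect.br(), color, 2);
    Core.circle(drawing, center, (int) radius[0], color, 2);
}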

I want to read a JSON URL using sockets and display it in a text box. When I use URLs other than my given URL, it shows the output, but it's not the same case with my URL; the content at the URL is very large, around 184,523 characters. Here is the code that I am using:

public class MainActivity extends Activity {

    URLConnection feedUrl = null;
    String json = "";
    TextView txt;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);

   new AsyncTask<Void, Void, Void>(){

        @Override
        protected void onPreExecute() {
            txt=(TextView)findViewById(R.id.myresponse);
            txt.setText("intialize");
        }
        @Override
        protected Void doInBackground(Void... params) {
            URLConnection feedUrl;
            try {
                feedUrl = new URL("http://earthquake.usgs.gov/earthquakes/feed/geojsonp/2.5/week").openConnection();
                InputStream is = feedUrl.getInputStream();

                BufferedReader reader = new BufferedReader(new InputStreamReader(is,"UTF-8"));
                StringBuilder sb = new StringBuilder();
                String line;

                while ((line = reader.readLine()) != null) {
                    sb.append(line);
                }
                reader.close();

                json = sb.toString();

            }catch(Exception e){
                e.printStackTrace();
            }

            return null;
        }

        @Override
        protected void onPostExecute(Void result) {
            txt.setText(json);
        }
    }.execute();
}
}

The URL is http://earthquake.usgs.gov/earthquakes/feed/geojsonp/2.5/week. Can someone please tell me why it is not working?

One more question: what should I do if I want to print only a specified object of the JSON, say the entire 5th object instead of the whole file? What do I do in that case?
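
For the second part, a minimal sketch (not from the original code) of pulling out a single element is below. It assumes the feed is GeoJSON with a top-level "features" array; the geojsonp variant wraps the body in a callback such as eqfeed_callback({...}), so the wrapper is stripped first:

import org.json.JSONArray;
import org.json.JSONException;
import org.json.JSONObject;

// Hypothetical helper: returns one feature from the downloaded feed as a string.
private String extractFeature(String raw, int index) throws JSONException {
    String payload = raw.trim();
    // Strip a JSONP wrapper like callbackName({...}); if present.
    int open = payload.indexOf('(');
    int close = payload.lastIndexOf(')');
    if (open >= 0 && close > open) {
        payload = payload.substring(open + 1, close);
    }
    JSONObject root = new JSONObject(payload);
    JSONArray features = root.getJSONArray("features");
    return features.getJSONObject(index).toString(2); // pretty-print one object
}

Calling something like txt.setText(extractFeature(json, 4)) in onPostExecute would then show only the 5th object instead of the whole file.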

Thanks in advance :)

I have seen many different replies to this, so can anyone give a definitive answer, please? Does a device running a recent OS need to be rooted to take screenshots from code? I am trying to do this and I just get a black picture (of the same dimensions as my screen). Many thanks.
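
For context, here is a minimal sketch of the non-root approach of drawing your own view hierarchy into a bitmap. It can only capture your own app's views; other apps and SurfaceView/video layers are not included, which is one common cause of an all-black result:

// Capture the current activity's window content into a Bitmap (no root needed).
// Call this after layout has happened (e.g. in a click handler) so width/height are non-zero.
View decor = getWindow().getDecorView().getRootView();
Bitmap shot = Bitmap.createBitmap(decor.getWidth(), decor.getHeight(),
        Bitmap.Config.ARGB_8888);
Canvas canvas = new Canvas(shot);
decor.draw(canvas); // draws only this app's views, not other apps or SurfaceViews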

I am new to live streaming in iOS Application development.

I am developing an application in which I need to send a live stream, captured from the iPhone camera, through the RTMP protocol over a secure (password-protected) channel/publisher to a Wowza server.

I have tried Medialib and Video-Core for this purpose. I am sending my RTMP URL as rtmp://userName:Password@domain.com:port/path/streamName.

Neither library works over this secure channel, while both work fine on a public channel with a URL like rtmp://domain.com:port/path/streamName.

I have also tried many other libraries available on the internet.

Is there any solution to my problem?

Thanks in Advance.

I have begun to make a character (body, legs and feet) using SpriteKit. So far so good.

I now wish to animate the character on touch: the character should lift a leg (_leg2) perpendicular to the _body, at a rotation of -90 degrees, so the leg sticks out and stays out.

I have used SKPhysicsJointPin to attach _leg2 to _body.

See my code for createCharacter below:

-(void)createCharacter {

    // Add sprites

    _body = [SKSpriteNode spriteNodeWithColor:[SKColor purpleColor] size:CGSizeMake(40, 60)];
    _body.position = CGPointMake(CGRectGetMidX(self.frame), CGRectGetMidY(self.frame));
    [self addChild: _body];

    _leg1 = [SKSpriteNode spriteNodeWithColor:[SKColor purpleColor] size:CGSizeMake(14, 60)];
    _leg1.position = CGPointMake(_body.position.x+20-7, _body.position.y-70);
    [self addChild:_leg1];

    _foot1 = [SKSpriteNode spriteNodeWithColor:[SKColor purpleColor] size:CGSizeMake(20, 10)];
    _foot1.position = CGPointMake(_leg1.position.x-2.5, _leg1.position.y-40);
    [self addChild:_foot1];

    _leg2 = [SKSpriteNode spriteNodeWithColor:[SKColor purpleColor] size:CGSizeMake(14, 60)];
    _leg2.position = CGPointMake(_body.position.x-20+7, _body.position.y-70);
    [self addChild:_leg2];

    _foot2 = [SKSpriteNode spriteNodeWithColor:[SKColor purpleColor] size:CGSizeMake(20, 10)];
    _foot2.position = CGPointMake(_leg2.position.x-2.5, _leg2.position.y-40);
    [self addChild:_foot2];

    // Add physics bodies to sprites

    _body.physicsBody = [SKPhysicsBody bodyWithRectangleOfSize:_body.size];

    _leg1.physicsBody = [SKPhysicsBody bodyWithRectangleOfSize:_leg1.size];
    _foot1.physicsBody = [SKPhysicsBody bodyWithRectangleOfSize:_foot1.size];
    _foot1.physicsBody.mass = 0.5;

    _leg2.physicsBody = [SKPhysicsBody bodyWithRectangleOfSize:_leg2.size];
    _foot2.physicsBody = [SKPhysicsBody bodyWithRectangleOfSize:_foot2.size];
    _foot2.physicsBody.mass = 0.5;

    // Add joints

    SKPhysicsJoint *hipJoint = [SKPhysicsJointFixed jointWithBodyA:_body.physicsBody bodyB:_leg1.physicsBody anchor:_leg1.position];
    SKPhysicsJoint *ankleJoint = [SKPhysicsJointFixed jointWithBodyA:_leg1.physicsBody bodyB:_foot1.physicsBody anchor:_foot1.position];

    SKPhysicsJointPin *hipJoint2 = [SKPhysicsJointPin jointWithBodyA:_body.physicsBody bodyB:_leg2.physicsBody anchor:CGPointMake(_leg2.position.x, CGRectGetMaxY(_leg2.frame))];
    SKPhysicsJoint *ankleJoint2 = [SKPhysicsJointFixed jointWithBodyA:_leg2.physicsBody bodyB:_foot2.physicsBody anchor:_foot2.position];

    [self.scene.physicsWorld addJoint:hipJoint];
    [self.scene.physicsWorld addJoint:ankleJoint];

    [self.scene.physicsWorld addJoint:hipJoint2];
    [self.scene.physicsWorld addJoint:ankleJoint2];

}

When it comes to animating the leg, I'm not sure whether to use an SKAction or apply an impulse. The initial touch needs to rotate the leg out, and the leg should stay out until the touch has ended.

Another requirement is that the leg always needs to stick out from the body, regardless of the _body's rotation.

This doesn't seem to cut it unfortunately:

-(void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {

    SKAction *rotateLeg = [SKAction rotateToAngle:-M_PI_2 duration:0.1f];
    [_leg2 runAction:rotateLeg];
}

-(void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {

}

Any guidance or help would be really appreciated.

Thanks in advance.

I'm getting occasional crashes and otherwise strange behaviour from my table view.

Screenshots (not included here)

Other clues:

  • This seems only to happen when loading the view for the first time, as opposed to on calling my refresh method
  • This table loads its sections and rows depending on some NSMutableDictionarys and NSMutableArrays
  • Often, there's no crash but the top cell or top few cells are animating in a strange way (e.g. disclosure indicator repeatedly flying from left to right across the width of the cell!)
  • I get EXACTLY the same issue with another of my table views (not detailed here, for brevity), with a very similar error also tracing back to cellForRowAtIndexPath! The code for this table is structured in a very similar way (i.e. viewDidLoad kicks off a whole refresh process)

Therefore, I suspect some sort of conflict between the original (automatic) load and the refresh that I kick off in viewDidLoad.

Some code follows. Note the multi-threading; I think this is the culprit...

@interface RA_MyShouts ()
@property NSMutableArray *sectionHeadings; // ordered array of strings like, "Today", "Next Wednesday", etc.
@property NSMutableDictionary *sectionToObjectsMap; // maps sectionHeading strings to objects
@property NSMutableDictionary *contentForCells; // maps objectIds to NSDictionary objects
@end

@implementation RA_MyShouts

- (void)viewDidLoad
{
    NSLog(@"%@ on thread %@", NSStringFromSelector(_cmd), [NSThread currentThread]);

    [super viewDidLoad];

    // Initialize properties
    self.sectionHeadings = [NSMutableArray array];
    self.sectionToObjectsMap = [NSMutableDictionary dictionary];

    // Load up the table
    [self fullRefresh];
}

-(void)fullRefresh
{
    // Prepare the progress HUD
    MBProgressHUD *HUD = [[MBProgressHUD alloc] initWithView:self.navigationController.view];
    [self.navigationController.view addSubview:HUD];
    HUD.delegate = self;

    // Set the progress HUD spinning on the main thread, while -prepareTable runs on another thread
    [HUD showWhileExecuting:@selector(prepareTable) onTarget:self withObject:nil animated:YES];
}

-(void)prepareTable
{
    NSLog(@"%@ on thread %@", NSStringFromSelector(_cmd), [NSThread currentThread]);

    [self doSomeTimeConsumingModificationsToMyMutableProperties];

    // Reload data
    NSLog(@"Calling reloadData on thread %@", [NSThread currentThread]);
    [self.tableView performSelectorOnMainThread:@selector(reloadData) withObject:nil waitUntilDone:YES];
}

- (NSInteger)numberOfSectionsInTableView:(UITableView *)tableView
{
    NSLog(@"%@ on thread %@", NSStringFromSelector(_cmd), [NSThread currentThread]);

    return 1;
}

- (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section
{
    NSLog(@"%@ on thread %@", NSStringFromSelector(_cmd), [NSThread currentThread]);

    return [self.orderedResults count];
}

// Main thread version
- (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath
{
    NSLog(@"%@ on thread %@", NSStringFromSelector(_cmd), [NSThread currentThread]);

    RA_RecommendedMatchCell *cell = [tableView dequeueReusableCellWithIdentifier:@"recommended_match_cell" forIndexPath:indexPath];

    RA_ParseShout *shout = [self.orderedResults objectAtIndex:indexPath.row];
    NSString *shoutId = shout.objectId;
    NSDictionary *content = [self.contentForCells objectForKey:shoutId];
    [cell configureCellWithContent:content];

    return cell;
}

I'm relatively new to ReactiveCocoa and was wondering how to chain a sequence of REST GET calls together so they run in order. If one of the calls errors, the whole process should roll back.

So I'm using pod 'AFNetworking-RACExtensions', '0.1.1'. I have an NSArray of signals. Most of these signals look like this:

- (RACSignal *)form
{
    @weakify(self);

    RACSubject *repSubject = [RACSubject subject];
    [[ServiceClient getForm] subscribeNext:^(RACTuple *jsonTuple) {
        if ([jsonTuple second])
        {
            // create core data objects here
            [repSubject sendNext:nil];
            [repSubject sendCompleted];
        }
    } error:^(NSError *error) {
        [repSubject sendError:error];
    }];
    return repSubject;
}

So a load of signals like this are in an NSArray. I want to process those calls in the order they appear in the array, one after another, and have a shared error handler and completion block. I think I have had some success without using the NSArray, with code like this:

@weakify(self);
[[[[[[self form] flattenMap:^(id value) {
    // perform your custom business logic
    @strongify(self);
   return [self signal2];
}] flattenMap:^(id value) {
    // perform your custom business logic
    @strongify(self);
    return [self signal3];
}] flattenMap:^(id value) {
    // perform your custom business logic
    @strongify(self);
    return [self signal4];
}] flattenMap:^(id value) {
    // perform your custom business logic
    @strongify(self);
    return [self signal5];
}] subscribeError:^(NSError *error) {
      @strongify(self);
      [self handleError: error];
  } completed:^{
      // Successful Full Sync
      // post notification
  }];    

How can I do all of this using an NSArray of signals while still being able to use the subscribeError and completed blocks?

I'm assuming it's something like:

 @weakify(self);
 [[[array.rac_sequence flattenMap:^RACStream *(id value) {
     // dunno what to do
 }] subscribeError:^(NSError *error) {
      @strongify(self);
      [self handleError: error];
  } completed:^{
      // Successful Full Sync
      // post notification
  }];

I have an iOS app that runs a web UI in one of its views. The application that appears in the web view is a Rails application with Foundation for the front end. For some reason, forms in the web view don't work correctly in the iOS Simulator. When I click drop-down menus, this is the result:

iOS simulator screenshot

Is this a problem unique to the simulator, or will it persist on actual devices?

I'm using Xcode 6 beta and Rails 4.1.1 with Foundation 5, Ruby 2.1.2.

I have a Tab Bar Controller inside a Navigation Controller, but I can't seem to set the navigation bar title or add a button to the navigation bar using:

self.title = @"My Name";

The code above only changes the Tab Bar Item name, not the navigation controller's title.

Secondly, I want to disable going back to the login screen (the screen with the UIWebView over it in the screenshot).

EDIT: I found a possible duplicate

Overview Storyboard

Is there any way to pin the phone dialer to the Start screen? (It should look like: Windows icon > dial phone number icon > dial number > call.)

In Windows Phone 8.1 we have "Files", a file explorer. I have found the location for images on the SD card, but I am unable to see the downloaded audio files there.

Is there any specific folder to search for audio files?

In iOS 7, Apple uses background-aware text in the Camera app when you take a panorama picture. It looks like this:

Normal background / light background (screenshots)

As the background changes, the text gets a darker shadow. How would one recreate this using CSS and possibly jQuery?

Basically, I am creating an Android app, and the basic problem is that the button looks way too ugly. I want to recognise a swipe or spin gesture on an image I provide, so that it starts a new activity. For example: I have an image of a bus, and if I spin it, it should start the activity that finds all the routes the bus follows. Starting the activity is not a concern; I just want to recognise the spin movement.
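
A minimal sketch of detecting a swipe (fling) on the image with GestureDetector is below; detecting a true "spin" would need custom rotation-angle tracking on top of this. BusRoutesActivity and R.id.bus_image are placeholder names, not from the original code:

// Inside an Activity (e.g. in onCreate), after setContentView().
final GestureDetector detector = new GestureDetector(this,
        new GestureDetector.SimpleOnGestureListener() {
            @Override
            public boolean onDown(MotionEvent e) {
                return true; // claim the gesture so onFling can fire
            }

            @Override
            public boolean onFling(MotionEvent e1, MotionEvent e2,
                                   float velocityX, float velocityY) {
                if (Math.abs(velocityX) > 1000) { // fast horizontal swipe (px/s)
                    startActivity(new Intent(MainActivity.this, BusRoutesActivity.class));
                    return true;
                }
                return false;
            }
        });

ImageView busImage = (ImageView) findViewById(R.id.bus_image);
busImage.setOnTouchListener(new View.OnTouchListener() {
    @Override
    public boolean onTouch(View v, MotionEvent event) {
        return detector.onTouchEvent(event); // forward touches to the detector
    }
});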

Thank You!

I am developing an application in which there is one Master user and several Child users (consider 5 Child users).

How can the Master user track the real-time location of all 5 Child users using PhoneGap? (I am developing the app on the Android platform.)

I want to do it using geolocation from a PhoneGap application.

Any help would be appreciated.

I am working with a database using LoaderManager.

I have created a local DB and table.

Here's the problem:

@Override
public Loader<Cursor> onCreateLoader(int id, Bundle args) {

    /*
     * Here's where to implement the code to instantiate & return a new loader.
     */
    return new CursorLoader(this,
                            URI??,
                            PROJECTION,
                            null,
                            null,
                            null);
}

The returned CursorLoader needs its second parameter, of type Uri, to point at the table to query.

But I have no idea how to put the table name into that second parameter.

Please give me a solution.
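
CursorLoader queries through a ContentProvider, so the second parameter is a content:// Uri whose path identifies the table your provider exposes. A minimal sketch, with a hypothetical authority and a table named "books", might look like this; if you have no ContentProvider at all, the usual alternative is a custom AsyncTaskLoader<Cursor> that queries SQLite directly:

// Hypothetical provider authority and table name; adjust to your own schema.
public static final String AUTHORITY = "com.example.myapp.provider";
public static final Uri BOOKS_URI = Uri.parse("content://" + AUTHORITY + "/books");

@Override
public Loader<Cursor> onCreateLoader(int id, Bundle args) {
    // The Uri's last path segment ("books") tells the ContentProvider which table to query.
    return new CursorLoader(this, BOOKS_URI, PROJECTION, null, null, null);
}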

So I create a file in a directory from my app, and I want to be able to get the file from my tablet by just connecting it to my laptop and copying it from the directory, but I am having trouble. I remember reading somewhere that you have to restart your device and the file just magically appears in the directory, and it worked! I don't want to have to restart my device every time; is there any way around this?

File emulatedStorage = Environment.getExternalStorageDirectory();
File directory = new File(emulatedStorage.getAbsolutePath() + "/logger");

// Check if the directory exists; create it if not.
if (!directory.exists()) {
    directory.mkdirs();
}
// make file and write stuff to it etc...

So, like I said, the file only shows up when I restart my tablet. Any suggestions or explanations?
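
One likely explanation: MTP (the USB file-transfer mode) only shows files the media scanner has indexed, and a reboot forces a full rescan. A minimal sketch of asking the scanner to index the file right after writing it, so it shows up without a reboot (logFile stands for the file you just created):

// Tell the media scanner about the new file so it becomes visible over MTP immediately.
MediaScannerConnection.scanFile(
        getApplicationContext(),
        new String[] { logFile.getAbsolutePath() },
        null, // let the scanner infer MIME types
        new MediaScannerConnection.OnScanCompletedListener() {
            @Override
            public void onScanCompleted(String path, Uri uri) {
                Log.d("logger", "Scanned " + path + " -> " + uri);
            }
        });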

Currently we are using UIAutomator to test our application, and all UI elements are accessible to UIAutomator.

We usually build the APK with additional code that shows a dialog indicating successful completion of a test case (i.e. the operation invoked by UIAutomator), to tell UIAutomator to proceed with the next test case.

The code responsible for showing the dialog is not committed to the repository; it is maintained as patches and is not allowed to be committed. For this reason, whenever we want to execute the UIAutomator tests, we build an APK with the additional code from the patches.

My question: is there any other way to tell UIAutomator that a test case has completed successfully (i.e. the application has finished the operation invoked by UIAutomator), without using a dialog?

Why we need it: to execute UIAutomator tests on release-candidate builds.

What I tried: setting a constant delay between test-case invocations.

But I cannot use a constant delay between test cases, as execution time varies based on test data and the device/environment.

I thought of a BroadcastReceiver, but I don't know how to register one from UIAutomator.

Is there any other mechanism or workaround to achieve this?
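
One workaround that needs no extra code in the APK: instead of a dialog, have the test wait for a UI state the release build already produces when the operation finishes (a result label appearing, a progress bar disappearing). A minimal sketch with the old UiAutomator API is below; "Upload complete" and the 30-second timeout are placeholders:

// Wait (with polling, not a fixed sleep) for whatever the app shows on success.
UiObject successLabel = new UiObject(new UiSelector().text("Upload complete"));
assertTrue("operation did not finish in time", successLabel.waitForExists(30000));

// Or wait for the busy indicator to go away.
UiObject progress = new UiObject(new UiSelector().className("android.widget.ProgressBar"));
assertTrue("progress indicator still visible", progress.waitUntilGone(30000));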

I want to know whether there is any method that can detect when a user uninstalls/removes my Android app. I have Google Analytics to see how many users uninstall my app, but I want exact data on how many users install and uninstall it.

If there is a method that runs when a user uninstalls my app, I will send a message to my server to record that this user removed my app, and I can also count how many people uninstall/remove my app every day.

Thanks for your help.

I launched a Worklight app on an Android device. I have included Worklight Settings in application-descriptor.xml for the Android platform.

I haven't found anything in the Android Settings.

How can I open the Worklight Settings of the app to change something, like the Worklight Server URL?

I am new to Android development and just installed the new Android SDK with Eclipse and the ADT bundle. From this question, I learned about installing the Intel x86 system image. But I have one point of confusion about installing Intel x86 for which I couldn't find any solution on the internet.

In the SDK Manager, Intel x86 system image installation options are shown for each API level, as in the picture.

image is here (since I have no privileges; hope someone will correct it)

So, my question is: do we need to have the Intel x86 system image for all API levels? (I am making an application which will support Android ICS through KitKat.)